

# 3. The Ethics of Emergent Creativity: Can We Move Beyond Writing as Human Enterprise, Commodity and Innovation?

Janneke Adema

© 2019 Janneke Adema, CC BY 4.0
[https://doi.org/10.11647/OBP.0159.03](https://doi.org/10.11647/OBP.0159.03)

In 2013, the Authors’ Licensing & Collecting Society
(ALCS)[1](ch3.xhtml#footnote-152) commissioned a survey of its members to
explore writers’ earnings and contractual issues in the UK. The survey, the
results of which were published in the summary booklet ‘What Are Words Worth
Now?’, was carried out by Queen Mary, University of London. Almost 2,500
writers — from literary authors to academics and screenwriters — responded.
‘What Are Words Worth Now?’ summarises the findings of a larger study titled
‘The Business Of Being An Author: A Survey Of Authors’ Earnings And
Contracts’, carried out by Johanna Gibson, Phillip Johnson and Gaetano Dimita
and published in April 2015 by Queen Mary University of
London.[2](ch3.xhtml#footnote-151) The ALCS press release that accompanies the
study states that this ‘shocking’ new research into authors’ earnings finds a
‘dramatic fall, both in incomes, and the number of those working full-time as
writers’.[3](ch3.xhtml#footnote-150) Indeed, two of the main findings of the
study are that, first of all, the income of a professional author (which the
research defines as those who dedicate the majority of their time to writing)
has dropped 29% between 2005 and 2013, from £12,330 (£15,450 in real terms) to
just £11,000. Furthermore, the research found that in 2005 40% of professional authors earned their incomes solely from writing, whereas in 2013 this figure had dropped to just 11.5%.[4](ch3.xhtml#footnote-149)

It seems that one of the primary reasons for the ALCS to conduct this survey
was to collect ‘accurate, independent data’ on writers’ earnings and
contractual issues, in order for the ALCS to ‘make the case for authors’
rights’ — at least, that is what the ALCS Chief Executive Owen Atkinson writes
in the introduction accompanying the survey, which was sent out to all ALCS
members.[5](ch3.xhtml#footnote-148) Yet although this research was conducted
independently and the researchers did not draw conclusions based on the data
collected — in the form of policy recommendations for example — the ALCS did
frame the data and findings in a very specific way, as I will outline in what follows; this framing is apparent in both the introduction to the survey and the press release that accompanies the survey’s findings. Yet to some extent this
framing, as I will argue, is already apparent in the methodology used to
produce the data underlying the research report.

First of all, let me provide an example of how the research findings have been
framed in a specific way. Chief Executive Atkinson mentions in his
introduction to the survey that the ALCS ‘exists to ensure that writers are
treated fairly and remunerated appropriately’. He continues that the ALCS
commissioned the survey to collect ‘accurate, independent data,’ in order to
‘make the case for writers’ rights’.[6](ch3.xhtml#footnote-147) Now this focus
on rights in combination with remuneration is all the more noteworthy if we
look at an earlier ALCS funded report from 2007, ‘Authors’ Earnings from
Copyright and Non-Copyright Sources: a Survey of 25,000 British and German
Writers’. This report is based on the findings of a 2006 writers’ survey,
which the 2013 survey updates. The 2007 report argues conclusively that
current copyright law has empirically failed to ensure that authors receive
appropriate reward or remuneration for the use of their
work.[7](ch3.xhtml#footnote-146) The data from the subsequent 2013 survey show
an even bleaker picture as regards the earnings of writers. Yet Atkinson
argues in the press release accompanying the findings of the 2013 survey that
‘if writers are to continue making their irreplaceable contribution to the UK
economy, they need to be paid fairly for their work. This means ensuring
clear, fair contracts with equitable terms and a copyright regime that supports creators and their ability to earn a living from their creations’.[8](ch3.xhtml#footnote-145) Atkinson does not outline what this
copyright regime should be, nor does he draw attention to how this model could
be improved. More importantly, the fact that a copyright model is needed to
ensure fair pay stands uncontested for Atkinson and the ALCS — not surprising
perhaps, as protecting and promoting the rights of authors is the primary
mission of this member society. If there is any culprit to be held responsible
for the study’s ‘shocking’ findings, it is the elusive and further undefined
notion of ‘the digital’. According to Atkinson, digital technology is
increasingly challenging the mission of the ALCS to ensure fair remuneration
for writers, since it is ‘driving new markets and leading the copyright
debate’.[9](ch3.xhtml#footnote-144) The 2013 study is therefore, as Atkinson states, ‘the first to capture the impact of the digital revolution on writers’ working lives’.[10](ch3.xhtml#footnote-143) This statement is all the more
striking if we take into consideration that none of the questions in the 2013
survey focus specifically on digital publishing.[11](ch3.xhtml#footnote-142)
It therefore seems that — despite earlier findings — the ALCS has already
decided in advance what ‘the digital’ is and that a copyright regime is the
only way to ensure fair remuneration for writers in a digital context.

## Creative Industries

This strong uncontested link between copyright and remuneration can be traced
back to various other aspects of the 2015 report and its release. For example,
the press release draws a strong connection between the findings of the report
and the development of the creative industries in the UK. Again, Atkinson
states in the press release:

These are concerning times for writers. This rapid decline in both author
incomes and in the numbers of those writing full-time could have serious
implications for the economic success of the creative industries in the
UK.[12](ch3.xhtml#footnote-141)

This connection to the creative industries — ‘which are now worth £71.4
billion per year to the UK economy’,[13](ch3.xhtml#footnote-140) Atkinson points out — is not surprising, given that the discourse around creative industries maintains a clear bond between intellectual property rights and creative labour. As Geert Lovink and Ned Rossiter state in their MyCreativity Reader,
the creative industries consist of ‘the generation and exploitation of
intellectual property’.[14](ch3.xhtml#footnote-139) Here they refer to a
definition created as part of the UK Government’s Creative Industries Mapping
Document,[15](ch3.xhtml#footnote-138) which states that the creative
industries are ‘those industries which have their origin in individual
creativity, skill and talent and which have a potential for wealth and job
creation through the generation and exploitation of intellectual property’.
Lovink and Rossiter point out that the relationship between IP and creative
labour lies at the basis of the definition of the creative industries where,
as they argue, this model of creativity assumes people only create to produce
economic value. This is part of a larger trend Wendy Brown has described as
being quintessentially neoliberal, where ‘neoliberal rationality disseminates
the model of the market to all domains and activities’ — and this includes the
realm of politics and rights.[16](ch3.xhtml#footnote-137) In this sense the economization of culture and of the concept of creativity is something that has become increasingly embedded and naturalised. The exploitation of intellectual
property stands at the basis of the creative industries model, in which
cultural value — which can be seen as intricate, complex and manifold —
becomes subordinated to the model of the market; it becomes economic
value.[17](ch3.xhtml#footnote-136)

This direct association of cultural value and creativity with economic value
is apparent in various other facets of the ALCS commissioned research and
report. Obviously, the title of the initial summary booklet, as a form of
wordplay, asks ‘What are words worth?’. It becomes clear from the context of
the survey that the ‘worth’ of words will only be measured in a monetary
sense, i.e. as economic value. Perhaps even more important to understand in this context, however, is how this economic worth of words is measured and determined by focusing in advance on two fixed and predetermined entities.
First of all, the study focuses on individual human agents of creativity (i.e.
creators contributing economic value): the value of writing is established by
collecting data and making measurements at the level of individual authorship,
addressing authors/writers as singular individuals throughout the survey.
Secondly, economic worth is further determined by focusing on the fixed and stable creative objects authors produce; in other words, the study establishes from the outset a clear link between the worth and value of writing and
economic remuneration based on individual works of
writing.[18](ch3.xhtml#footnote-135) Therefore in this process of determining
the economic worth of words, ‘writers’ and/or ‘authors’ are described and
positioned in a certain way in this study (i.e. as the central agents and
originators of creative objects), as is the form their creativity takes in the
shape of quantifiable outputs or commodities. The value of both these units of measurement (the creator and the creative objects) is then set off against the growth of the creative industries in the press release.

The ALCS commissioned survey provides some important insights into how authorship, cultural works and remuneration — and ultimately, creativity — are currently valued, specifically in the context of the creative industries
discourse in the UK. What I have tried to point out — without wanting to
downplay the importance either of writers receiving fair remuneration for
their work or of issues related to the sustainability of creative processes —
is that the findings from this survey have both been extracted and
subsequently framed based on a very specific economic model of creativity (and
authorship). According to this model, writing and creativity are sustained
most clearly by an individual original creator (an author) who extracts value
from the work s/he creates and distributes, aided by an intellectual property
rights regime. As I will outline in more depth in what follows, the enduring
liberal and humanist presumptions that underlie this survey continuously
reinforce the links between the value of writing and established IP and
remuneration regimes, and support a vision in which authorship and creativity
are dependent on economic incentives and ownership of works. By working within
this framework and with these predetermined concepts of authorship and
creativity (and ‘the digital’) the ALCS is strongly committed to the upkeep of
a specific model and discourse of creativity connected to the creative
industries. The ALCS does not attempt to complicate this model, nor does it
search for alternatives even when, as the 2007 report already implies, the
existing IP model has empirically failed to support the remuneration of
writers appropriately.

I want to use this ALCS survey as a reference point to start problematising
existing constructions of creativity, authorship, ownership, and
sustainability in relation to the ethics of publishing. To explore what ‘words
are worth’ and to challenge the hegemonic liberal humanist model of creativity
— to which the ALCS adheres — I will examine a selection of theoretical and
practical publishing and writing alternatives, from relational and posthuman
authorship to radical open access and uncreative writing. These alternatives
do not deny the importance of fair remuneration and sustainability for the
creative process; however, they want to foreground and explore creative
relationalities that move beyond the individual author and her ownership of
creative objects as the only model to support creativity and cultural
exchange. By looking at alternatives while at the same time complicating the
values and assumptions underlying the dominant narrative for IP expansion, I
want to start imagining what more ethical, fair and emergent forms of
creativity might entail. Forms that take into consideration the various
distributed and entangled agencies involved in the creation of cultural
content — which are presently not being included in the ALCS survey on fair
remuneration, for example. As I will argue, a reconsideration of the liberal
and humanist model of creativity might actually create new possibilities to
consider the value of words, and with that perhaps new solutions to the
problems pointed out in the ALCS study.

## Relational and Distributed Authorship

One of the main critiques of the liberal humanist model of authorship concerns
how it privileges the author as the sole source and origin of creativity. Yet
the argument has been made, both from a historical perspective and in relation
to today’s networked digital environment, that authorship and creativity, and
with that the value and worth of that creativity, are heavily
distributed.[19](ch3.xhtml#footnote-134) Should we therefore think about how
we can distribute notions of authorship and creativity more ethically when
defining the worth and value of words too? Would this perhaps mean a more
thorough investigation of what and who the specific agencies involved in
creative production are? This seems all the more important given that, today,
‘the value of words’ is arguably connected not to (distributed) authors or
creative agencies, but to rights holders (or their intermediaries such as
agents).[20](ch3.xhtml#footnote-133) From this perspective, the problem with the copyright model as it currently functions is that the creators of copyrighted works do not necessarily end up benefiting from it — a point that was also implied by the authors of the 2007 ALCS commissioned report. Copyright
benefits rights holders, and rights holders are not necessarily, and often not
at all, involved in the production of creative work.

Yet copyright and the work as object are knit tightly to the authorship
construct. In this respect, the above criticism notwithstanding, in a liberal
vision of creativity and ownership the typical unit remains either the author
or the work. This ‘solid and fundamental unit of the author and the work’, as Foucault has qualified it, albeit challenged, still retains a privileged position.[21](ch3.xhtml#footnote-132) As Mark Rose argues, authorship — as a relatively recent cultural formation — can be directly connected to the commodification of writing and to proprietorship. Moreover, it developed in tandem with the societal principle of possessive individualism, in which
individual property rights are protected by the social
order.[22](ch3.xhtml#footnote-131)

Some of the more interesting recent critiques of these constructs of
authorship and proprietorship have come from critical and feminist legal
studies, where scholars such as Carys Craig have started to question these
connections further. As Craig, Turcotte and Coombe argue, IP and copyright are
premised on liberal and neoliberal assumptions and constructs, such as
ownership, private rights, self-interest and
individualism.[23](ch3.xhtml#footnote-130) In this sense copyright,
authorship, the work as object, and related discourses around creativity
continuously re-establish and strengthen each other as part of a self-
sustaining system. We have seen this with the discourse around creative
industries, as part of which economic value comes to stand in for the creative
process itself, which, according to this narrative, can only be sustained
through an IP regime. Furthermore, from a feminist new materialist position, the current discourse on creativity is very much a material expression of creativity rather than merely its representation: this discourse has been classifying, constructing, and situating creativity (and with that, authorship) within a neoliberal framework of creative industries.

Moving away from an individual construct of creativity therefore immediately
affects the question of the value of words. In our current copyright model
emphasis lies on the individual original author, but in a more distributed
vision the value of words and of creative production can be connected to a
broader context of creative agencies. Historically there has been a great
discursive shift from a valuing of imitation or derivation to a valuing of
originality in determining what counts as creativity or creative output.
Similar to Rose, Craig, Turcotte and Coombe argue that the individuality and
originality of authorship in its modern form established a simple route
towards individual ownership and the propertisation of creative achievement: the original work is the author’s property, whereas the imitator or pirate is a trespasser or thief. In this sense original authorship is
‘disproportionately valued against other forms of cultural expression and
creative play’, where copyright upholds, maintains and strengthens the binary
between imitator and creator — defined by Craig, Turcotte and Coombe as a
‘moral divide’.[24](ch3.xhtml#footnote-129) This also presupposes a notion of
creativity that sees individuals as autonomous, living in isolation from each
other, ignoring their relationality. Yet as Craig, Turcotte and Coombe argue,
‘the act of writing involves not origination, but rather the adaptation,
derivation, translation and recombination of “raw material” taken from
previously existing texts’.[25](ch3.xhtml#footnote-128) This position has also
been explored extensively from within remix studies and fan culture, where the
adaptation and remixing of cultural content stands at the basis of creativity
(what Lawrence Lessig has called Read/Write culture, opposed to Read/Only
culture).[26](ch3.xhtml#footnote-127) From the perspective of access to culture — instead of ownership of cultural goods or objects — one could also argue that the value of culture increases when we are able to distribute it freely, and to adapt and remix it to create new cultural content and, with that, new cultural and social value. This is all the more pertinent in a context in which, as Craig, Turcotte and Coombe point out, ‘the continuous expansion of intellectual property rights has produced legal regimes that restrict access and downstream use of information resources far beyond what is required to encourage their creation’.[27](ch3.xhtml#footnote-126)

To move beyond Enlightenment ideals of individuation, detachment and unity of
author and work, which determine the author-owner in the copyright model,
Craig puts forward a post-structuralist vision of relational authorship. This vision sees the individual as socially situated and constituted — drawing also on feminist scholarship into the socially situated self — and situates authorship within the communities in which it exists, but also in relation to the texts and discourses that constitute it. Here creativity takes
place from within a network of social relations and the social dimensions of
authorship are recognised, as connectivity goes hand in hand with individual
autonomy. Craig argues that copyright should not be defined in terms of clashing rights and interests but should instead focus on the kinds of relationships this right would structure; it should be understood in relational terms: ‘it
structures relationships between authors and users, allocating powers and
responsibilities amongst members of cultural communities, and establishing the
rules of communication and exchange’.[28](ch3.xhtml#footnote-125) Cultural
value is then defined within these relationships.

## Open Access and the Ethics of Care

Craig, Turcotte and Coombe draw a clear connection between relational
authorship, feminism and (the ideals of) the open access movement, where as
they state, ‘rather than adhering to the individuated form of authorship that
intellectual property laws presuppose, open access initiatives take into
account varying forms of collaboration, creativity and
development’.[29](ch3.xhtml#footnote-124) Yet as I and others have argued
elsewhere,[30](ch3.xhtml#footnote-123) open access or open access publishing
is not a solid ideological block or model; it is made up of disparate groups,
visions and ethics. In this sense there is nothing intrinsically political or democratic about open access; its practitioners can just as well be seen to support and encourage open access in connection with the neoliberal
knowledge economy, with possessive individualism — even with CC licenses,
which can be seen as strengthening individualism —[31](ch3.xhtml#footnote-122)
and with the unity of author and work.[32](ch3.xhtml#footnote-121)

Nevertheless, there are those within the loosely defined and connected ‘radical open access community’ who do envision their publishing outlook and relationship towards copyright, openness and authorship within and as part of a relational ethics of care.[33](ch3.xhtml#footnote-120) For example, Mattering
Press, a scholar-led open access book publishing initiative founded in 2012
and launched in 2016, publishes in the field of Science and Technology Studies
(STS) and works with a production model based on cooperation and shared
scholarship. As part of its publishing politics, ethos and ideology, Mattering
Press is therefore keen to include various agencies involved in the production
of scholarship, including ‘authors, reviewers, editors, copy editors, proof
readers, typesetters, distributers, designers, web developers and
readers’.[34](ch3.xhtml#footnote-119) They work with two interrelated feminist
(new materialist) and STS concepts to structure and perform this ethos:
mattering[35](ch3.xhtml#footnote-118) and care.[36](ch3.xhtml#footnote-117)
With respect to mattering, Mattering Press is conscious of how their experiment in knowledge production, being inherently situated, puts new relationships and configurations into the world. What therefore matters for
them are not so much the ‘author’ or the ‘outcome’ (the object), but the
process and the relationships that make up publishing:

[…] the way academic texts are produced matters — both analytically and
politically. Dominant publishing practices work with assumptions about the
conditions of academic knowledge production that rarely reflect what goes on
in laboratories, field sites, university offices, libraries, and various
workshops and conferences. They tend to deal with almost complete manuscripts
and a small number of authors, who are greatly dependent on the politics of
the publishing industry.[37](ch3.xhtml#footnote-116)

For Mattering Press care is something that extends not only to authors but to
the many other actants involved in knowledge production, who often provide
free volunteer labour within a gift economy context. As Mattering Press
emphasises, the ethics of care ‘mark vital relations and practices whose value
cannot be calculated and thus often goes unacknowledged where logics of
calculation are dominant’.[38](ch3.xhtml#footnote-115) For Mattering Press,
care can help offset and engage with the calculative logic that permeates
academic publishing:

[…] the concept of care can help to engage with calculative logics, such as
those of costs, without granting them dominance. How do we calculate so that
calculations do not dominate our considerations? What would it be to care for
rather than to calculate the cost of a book? This is but one and arguably a
relatively conservative strategy for allowing other logics than those of
calculation to take centre stage in publishing.[39](ch3.xhtml#footnote-114)

This logic of care refers, in part, to making visible the ‘unseen others’, as Joe Deville (one of Mattering Press’s editors) calls them, who exemplify the plethora of hidden labour that goes unnoticed within this object- and author-focused (academic) publishing model. As Endre Danyi, another Mattering Press
editor, remarks, quoting Susan Leigh Star: ‘This is, in the end, a profoundly
political process, since so many forms of social control rely on the erasure
or silencing of various workers, on deleting their work from representations
of the work’.[40](ch3.xhtml#footnote-113)

## Posthuman Authorship

Authorship is also being reconsidered as a polyvocal and collaborative
endeavour by reflecting on the agentic role of technology in authoring
content. Within digital literature, hypertext and computer-generated poetry,
media studies scholars have explored the role played by technology and the
materiality of text in the creation process, where in many ways writing can be
seen as a shared act between reader, writer and computer. Lori Emerson
emphasises that machines, media or technology are not neutral in this respect,
which complicates the idea of human subjectivity. Emerson explores this
through the notion of ‘cyborg authorship’, which examines the relation between
machine and human with a focus on the potentiality of in-
betweenness.[41](ch3.xhtml#footnote-112) Dani Spinosa talks about
‘collaboration with an external force (the computer, MacProse, technology in
general)’.[42](ch3.xhtml#footnote-111) Extending from the author, the text
itself, and the reader as meaning-writer (and hence playing a part in the
author function), technology, she states, is a fourth term in this
collaborative meaning-making. As Spinosa argues, in computer-generated texts
the computer is more than a technological tool and becomes a co-producer,
where it can occur that ‘the poet herself merges with the machine in order to
place her own subjectivity in flux’.[43](ch3.xhtml#footnote-110) Emerson calls
this a ‘break from the model of the poet/writer as divinely inspired human
exemplar’, which is exemplified for her in hypertext, computer-generated
poetry, and digital poetry.[44](ch3.xhtml#footnote-109)

Yet in many ways, as Emerson and Spinosa also note, these forms of posthuman
authorship should be seen as part of a larger trend, what Rolf Hughes calls an
‘anti-authorship’ tradition focused on auto-poesis (self-making), generative
systems and automatic writing. As Hughes argues, we see this tradition in
print forms such as Oulipo and in Dada experiments and surrealist games
too.[45](ch3.xhtml#footnote-108) But there are connections here with broader theories that focus on distributed agency, especially concerning the influence of the materiality of the text. Media theorists such as N.
Katherine Hayles and Johanna Drucker have extensively argued that the
materiality of the page is entangled with the intentionality of the author as
a further agency; Drucker conceptualises this through a focus on ‘conditional
texts’ and ‘performative materiality’ with respect to the agency of the
material medium (be it the printed page or the digital
screen).[46](ch3.xhtml#footnote-107)

Where, however, does the redistribution of value creation end in these
narratives? As Nick Montfort states with respect to the agency of technology,
‘should other important and inspirational mechanisms — my CD player, for
instance, and my bookshelves — get cut in on the action as
well?’[47](ch3.xhtml#footnote-106) These distributed forms of authorship do
not solve issues related to authorship or remuneration but further complicate
them. Nevertheless Montfort is interested in describing the processes involved
in these types of (posthuman) co-authorship, to explore the (previously
unexplored) relationships and processes involved in the authoring of texts
more clearly. As he states, this ‘can help us understand the role of the
different participants more fully’.[48](ch3.xhtml#footnote-105) In this
respect a focus on posthuman authorship and on the various distributed
agencies that play a part in creative processes is not only a means to disrupt
the hegemonic focus on a romantic single and original authorship model, but it
is also about a sensibility to (machinic) co-authorship, to the different
agencies involved in the creation of art, and playing a role in creativity
itself. As Emerson remarks in this respect: ‘we must be wary of granting a
(romantic) specialness to human intentionality — after all, the point of
dividing the responsibility for the creation of the poems between human and
machine is to disrupt the singularity of human identity, to force human
identity to intermingle with machine identity’.[49](ch3.xhtml#footnote-104)

## Emergent Creativity

This more relational notion of rights and the wider appreciation of the various (posthuman) agencies involved in creative processes based on an ethics of care challenge the vision of the single individualised and original author/owner who stands at the basis of our copyright and IP regime — a vision
that, it is worth emphasising, can be seen as a historical (and Western)
anomaly, where collaborative, anonymous, and more polyvocal models of
authorship have historically prevailed.[50](ch3.xhtml#footnote-103) The other
side of the Foucauldian double bind, i.e. the fixed cultural object that
functions as a commodity, has however been similarly critiqued from several
angles. As stated before, and as also apparent from the way the ALCS report
has been framed, currently our copyright and remuneration regime is based on
ownership of cultural objects. Yet as many have already made clear, this
regime and discourse is very much based on physical objects and on a print-
based context.[51](ch3.xhtml#footnote-102) As such the idea of ‘text’ (be it
print or digital) has not been sufficiently problematised as versioned,
processual and materially changing within an IP context. In other words, text
and works are mostly perceived as fixed and stable objects and commodities
instead of material and creative processes and entangled relationalities. As
Craig et al. state, ‘the copyright system is unfortunately employed to
reinforce the norms of the analog world’.[52](ch3.xhtml#footnote-101) In
contrast to a more relational perspective, the current copyright regime views
culture through a proprietary lens. And it is very much this discursive
positioning, or as Craig et al. argue ‘the language of “ownership,”
“property,” and “commodity”’, which ‘obfuscates the nature of copyright’s
subject matter, and cloaks the social and cultural conditions of its
production and the implications of its
protection’.[53](ch3.xhtml#footnote-100) How can we approach creativity in
context, as socially and culturally situated, and not as the free-standing,
stable product of a transcendent author, which is very much how it is being
positioned within an economic and copyright framework? This hegemonic
conception of creativity as property fails to acknowledge or take into
consideration the manifold, distributed, derivative and messy realities of
culture and creativity.

It is therefore important to put forward and promote another more emergent
vision of creativity, where creativity is seen as both processual and only
ever temporarily fixed, and where the work itself is seen as being the product
of a variety of (posthuman) agencies. Interestingly, someone who has written
very elaborately about a different form of creativity relevant to this context
is one of the authors of the ALCS commissioned report, Johanna Gibson. Similar
to Craig, who focuses on the relationality of copyright, Gibson wants to pay
more attention to the networking of creativity, moving it beyond a focus on traditional models of producers and consumers towards a ‘many-to-many’ model of creativity. For Gibson, IP as a system aligns with a corporate model
of creativity, one which oversimplifies what it means to be creative and
measures it against economic parameters alone.[54](ch3.xhtml#footnote-099) In many ways, in policy-driven visions, IP has come to stand in for the creative process itself, Gibson argues, and is assimilated within corporate models of
innovation. It has thus become a synonym for creativity, as we have seen in
the creative industries discourse. As Gibson explains, this simplified model
of creativity is very much a ‘discursive strategy’ in which the creator is
mythologised and output comes in the form of commodified
objects.[55](ch3.xhtml#footnote-098) In this sense we need to re-appropriate
creativity as an inherently fluid and uncertain concept and practice.

Yet this mimicry of creativity by IP and innovation at the same time means
that any re-appropriation of creativity from the stance of access and reuse is
targeted as anti-IP and thus as standing outside of formal creativity. Other,
more emergent forms of creativity have trouble existing within this self-
defining and sustaining hegemonic system. This is similar to what Craig
remarked with respect to remixed, counterfeit and pirated, and un-original
works, which are seen as standing outside the system. Gibson uses actor
network theory (ANT) as a framework to construct her network-based model of
creativity, where for her ANT allows for a vision that does not fix creativity
within a product, but focuses more on the material relationships and
interactions between users and producers. In this sense, she argues, a network
model allows for plural agencies to be attributed to creativity, including
those of users.[56](ch3.xhtml#footnote-097)

An interesting example of how the hegemonic object-based discourse of
creativity can be re-appropriated comes from the conceptual poet Kenneth
Goldsmith, who, in what could be seen as a direct response to this dominant narrative, tries to emphasise that exactly what this discourse classifies as ‘uncreative’ should be seen as valuable in itself. Goldsmith points out that
appropriating is creative and that he uses it as a pedagogical method in his
classes on ‘Uncreative Writing’ (which he defines as ‘the art of managing
information and representing it as writing’[57](ch3.xhtml#footnote-096)). Here
‘uncreative writing’ is something to strive for and stealing, copying, and
patchwriting are elevated as important and valuable tools for writing. For
Goldsmith the digital environment has fostered new skills and notions of
writing beyond the print-based concepts of originality and authorship: next to
copying, editing, reusing and remixing texts, the management and manipulation
of information becomes an essential aspect of
creativity.[58](ch3.xhtml#footnote-095) Uncreative writing involves a
repurposing and appropriation of existing texts and works, which then become
materials or building blocks for further works. In this sense Goldsmith
critiques the idea of texts or works as being fixed when asking, ‘if artefacts
are always in flux, when is a historical work determined to be
“finished”?’[59](ch3.xhtml#footnote-094) At the same time, he argues, our
identities are also in flux and ever shifting, turning creative writing into a
post-identity literature.[60](ch3.xhtml#footnote-093) Machines play important
roles in uncreative writing, as active agents in the ‘managing of
information’, which is then again represented as writing, and is seen by
Goldsmith as a bridge between human-centred writing and full-blown
‘robopoetics’ (literature written by machines, for machines). Yet Goldsmith is
keen to emphasise that these forms of uncreative writing are not beholden to
the digital medium, and that pre-digital examples are plentiful in conceptual
literature and poetry. He points out — again by a discursive re-appropriation
of what creativity is or can be — that sampling, remixing and appropriation
have been the norm in other artistic and creative media for decades. The literary world is lagging behind in this respect: despite the experiments of modernist writers, it continues to delineate the avant-garde neatly from more general forms of writing. Yet as Goldsmith argues, the digital has
started to disrupt this distinction again, moving beyond ‘analogue’ notions of
writing, and has fuelled with it the idea that there might be alternative
notions of writing: those currently perceived as
uncreative.[61](ch3.xhtml#footnote-092)

## Conclusion

There are two addenda to the argument I have outlined above that I would like to include here. First of all, I would like to complicate and further
critique some of the preconceptions still inherent in the relational and
networked copyright models as put forward by Craig et al. and Gibson. Both are
in many ways reformist and ‘responsive’ models. Gibson, for example, does not want to do away with IP rights; rather, she wants them to develop and adapt to mirror society more accurately according to a networked model of creativity. For her,
the law is out of tune with its public, and she wants to promote a more
inclusive networked (copy) rights model.[62](ch3.xhtml#footnote-091) For Craig
too, relationalities are established and structured by rights first and
foremost. Yet from a posthuman perspective we need to be conscious of how the
other actants involved in creativity would fall outside such a humanist and
subjective rights model.[63](ch3.xhtml#footnote-090) From texts and
technologies themselves to the wider environmental context and to other
nonhuman entities and objects: in what sense will a copyright model be able to
extend such a network beyond an individualised liberal humanist human subject?
What do these models exclude in this respect and in what sense are they still
limited by their adherence to a rights model that continues to rely on
humanist nodes in a networked or relational model? As Anna Munster has argued
in a talk about the case of the monkey selfie, copyright is based on a logic
of exclusion that does not line up with the assemblages of agentic processes
that make up creativity and creative expression.[64](ch3.xhtml#footnote-089)
How can we appreciate the relational and processual aspects of identity, which
both Craig and Gibson seem to want to promote, if we hold on to an inherently
humanist concept of subjectification, rights and creativity?

Secondly, I want to highlight that we need to remain cautious of a movement
away from copyright and the copyright industries, to a context of free culture
in which free content — and the often free labour it is based upon — ends up servicing the content industries (e.g. Facebook, Google, Amazon). We must be
wary when access or the narrative around (open) access becomes dominated by
access to or for big business, benefitting the creative industries and the
knowledge economy. The danger of updating and adapting IP law to fit a
changing digital context and to new technologies, of making it more inclusive
in this sense — which is something both Craig and Gibson want to do as part of
their reformative models — is that this tends to be based on a very simplified
and deterministic vision of technology, as something requiring access and an
open market to foster innovation. As Sarah Kember argues, this technocratic rationale, which is what unites pro- and anti-copyright activists in this sense, essentially de-politicises the debate around IP; it is still a question
of determining the value of creativity through an economic perspective, based
on a calculative lobby.[65](ch3.xhtml#footnote-088) The challenge here is to
redefine the discourse in such a way that our focus moves away from a dominant
market vision, and — as Gibson and Craig have also tried to do — to emphasise
a non-calculative ethics of relations, processes and care instead.

I would like to return at this point to the ALCS report and the way its
results have been framed within a creative industries discourse.
Notwithstanding the fact that fair remuneration and incentives for literary
production and creativity in general are of the utmost importance, what I have
tried to argue here is that the ‘solution’ proposed by the ALCS does not do
justice to the complexities of creativity. When discussing remuneration of
authors, the ALCS seems to prefer a simple solution in which copyright is seen
as a given, the digital is pointed out as a generalised scapegoat, and
binaries between print and digital are maintained and strengthened.
Furthermore, fair remuneration is encapsulated by the ALCS within an economic
calculative logic and rhetoric, sustained by and connected to a creative
industries discourse, which continuously recreates the idea that creativity
and innovation are one. Instead I have tried to put forward various
alternative visions and practices, from radical open access to posthuman
authorship and uncreative writing, based on vital relationships and on an
ethics of care and responsibility. These alternatives highlight distributed
and relational authorship and/or showcase a sensibility that embraces
posthuman agencies and processual publishing as part of a more complex,
emergent vision of creativity, open to different ideas of what creativity is
and can become. In this vision creativity is thus seen as relational, fluid and processual, and only ever temporarily fixed as part of our ethical decision-making: a decision-making process that is contingent on the contexts and
relationships with which we find ourselves entangled. This involves asking
questions about what writing is and does, and how creativity expands beyond
our established, static, or given concepts, which include copyright and a
focus on the author as a ‘homo economicus’, writing as inherently an
enterprise, and culture as commodified. As I have argued, the value of words,
indeed the economic worth and sustainability of words and of the ‘creative
industries’, can and should be defined within a different narrative. Opening
up from the hegemonic creative industries discourse and the way we perform it
through our writing practices might therefore enable us to explore extended
relationalities of emergent creativity, open-ended publishing processes, and a
feminist ethics of care and responsibility.

This contribution has showcased examples of experimental, hybrid and posthuman
writing and publishing practices that are intervening in this established
discourse on creativity. How, through them, can we start to performatively
explore a new discourse and reconfigure the relationships that underlie our
writing processes? How can the worth of writing be reflected in different
ways?

## Works Cited

(2014) ‘New Research into Authors’ Earnings Released’, Authors’ Licensing and
Collecting Society,
Us/News/News/What-are-words-worth-now-not-much.aspx>

Abrahamsson, Sebastian, Uli Beisel, Endre Danyi, Joe Deville, Julien McHardy,
and Michaela Spencer (2013) ‘Mattering Press: New Forms of Care for STS
Books’, The EASST Review 32.4, volume-32-4-december-2013/mattering-press-new-forms-of-care-for-sts-books/>

Adema, Janneke (2017) ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in Remix
Studies (New York and London: Routledge), pp. 104–14,


— (2014) ‘Embracing Messiness’, LSE Impact of Social Sciences,
adema-pdsc14/>

— (2015) ‘Knowledge Production Beyond The Book? Performing the Scholarly
Monograph in Contemporary Digital Culture’ (PhD dissertation, Coventry
University), f4c62c77ac86/1/ademacomb.pdf>

— (2014) ‘Open Access’, in Critical Keywords for the Digital Humanities
(Lueneburg: Centre for Digital Cultures (CDC)),


— and Gary Hall (2013) ‘The Political Nature of the Book: On Artists’ Books
and Radical Open Access’, New Formations 78.1, 138–56,


— and Samuel Moore (2018) ‘Collectivity and Collaboration: Imagining New Forms
of Communality to Create Resilience in Scholar-Led Publishing’, Insights 31.3,


ALCS, Press Release (8 July 2014) ‘What Are Words Worth Now? Not Enough’, https://www.alcs.co.uk/news/what-are-words-worth-now-not-enough


Barad, Karen (2007) Meeting the Universe Halfway: Quantum Physics and the
Entanglement of Matter and Meaning (Durham, N.C., and London: Duke University
Press).

Boon, Marcus (2010) In Praise of Copying (Cambridge, MA: Harvard University
Press).

Brown, Wendy (2015) Undoing the Demos: Neoliberalism’s Stealth Revolution
(Cambridge, MA: MIT Press).

Chartier, Roger (1994) The Order of Books: Readers, Authors, and Libraries in
Europe Between the 14th and 18th Centuries, 1st ed. (Stanford, CA: Stanford
University Press).

Craig, Carys J. (2011) Copyright, Communication and Culture: Towards a
Relational Theory of Copyright Law (Cheltenham, UK, and Northampton, MA:
Edward Elgar Publishing).

— Joseph F. Turcotte, and Rosemary J. Coombe (2011) ‘What’s Feminist About
Open Access? A Relational Approach to Copyright in the Academy’, Feminists@law
1.1,

Cramer, Florian (2013) Anti-Media: Ephemera on Speculative Arts (Rotterdam and
New York, NY: nai010 publishers).

Drucker, Johanna (2015) ‘Humanist Computing at the End of the Individual Voice
and the Authoritative Text’, in Patrik Svensson and David Theo Goldberg
(eds.), Between Humanities and the Digital (Cambridge, MA: MIT Press), pp.
83–94.

— (2014) ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1, 11–29.

— (2013) ‘Performative Materiality and Theoretical Approaches to Interface’,
Digital Humanities Quarterly 7.1 [n.p.],


Ede, Lisa, and Andrea A. Lunsford (2001) ‘Collaboration and Concepts of
Authorship’, PMLA 116.2, 354–69.

Emerson, Lori (2008) ‘Materiality, Intentionality, and the Computer-Generated
Poem: Reading Walter Benn Michaels with Erin Moureacute’s Pillage Land’, ESC:
English Studies in Canada 34, 45–69.

— (2003) ‘Digital Poetry as Reflexive Embodiment’, in Markku Eskelinen, Raine
Koskimaa, Loss Pequeño Glazier and John Cayley (eds.), CyberText Yearbook
2002–2003, 88–106,

Foucault, Michel, ‘What Is an Author?’ (1998) in James D. Faubion (ed.),
Essential Works of Foucault, 1954–1984, Volume Two: Aesthetics, Method, and
Epistemology (New York: The New Press).

Gibson, Johanna (2007) Creating Selves: Intellectual Property and the
Narration of Culture (Aldershot, England and Burlington, VT: Routledge).

— Phillip Johnson and Gaetano Dimita (2015) The Business of Being an Author: A
Survey of Author’s Earnings and Contracts (London: Queen Mary University of
London), [https://orca.cf.ac.uk/72431/1/Final Report - For Web
Publication.pdf](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)

Goldsmith, Kenneth (2011) Uncreative Writing: Managing Language in the Digital
Age (New York: Columbia University Press).

Hall, Gary (2010) ‘Radical Open Access in the Humanities’ (presented at the
Research Without Borders, Columbia University),
humanities/>

— (2008) Digitize This Book!: The Politics of New Media, or Why We Need Open
Access Now (Minneapolis, MN: University of Minnesota Press).

Hayles, N. Katherine (2004) ‘Print Is Flat, Code Is Deep: The Importance of
Media-Specific Analysis’, Poetics Today 25.1, 67–90,


Hughes, Rolf (2005) ‘Orderly Disorder: Post-Human Creativity’, in Proceedings
of the Linköping Electronic Conference (Linköpings universitet: University
Electronic Press).

Jenkins, Henry, and Owen Gallagher (2008) ‘“What Is Remix Culture?”: An
Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,


Johns, Adrian (1998) The Nature of the Book: Print and Knowledge in the Making
(Chicago, IL: University of Chicago Press).

Kember, Sarah (2016) ‘Why Publish?’, Learned Publishing 29, 348–53,


— (2014) ‘Why Write?: Feminism, Publishing and the Politics of Communication’,
New Formations: A Journal of Culture/Theory/Politics 83.1, 99–116.

Kretschmer, M., and P. Hardwick (2007) Authors’ Earnings from Copyright and Non-Copyright Sources: A Survey of 25,000 British and German Writers (Poole,
UK: CIPPM/ALCS Bournemouth University),
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)

Lessig, Lawrence (2008) Remix: Making Art and Commerce Thrive in the Hybrid
Economy (New York: Penguin Press).

Lovink, Geert, and Ned Rossiter (eds.) (2007) MyCreativity Reader: A Critique
of Creative Industries (Amsterdam: Institute of Network Cultures),


McGann, Jerome J. (1992) A Critique of Modern Textual Criticism
(Charlottesville, VA: University of Virginia Press).

McHardy, Julien (2014) ‘Why Books Matter: There Is Value in What Cannot Be
Evaluated.’, Impact of Social Sciences [n.p.],


Mol, Annemarie (2008) The Logic of Care: Health and the Problem of Patient
Choice, 1st ed. (London and New York: Routledge).

Montfort, Nick (2003) ‘The Coding and Execution of the Author’, in Markku Eskelinen, Raine Koskimaa, Loss Pequeño Glazier and John Cayley (eds.), CyberText Yearbook 2002–2003, 201–17.

Moore, Samuel A. (2017) ‘A Genealogy of Open Access: Negotiations between
Openness and Access to Research’, Revue Française des Sciences de
l’information et de la Communication 11,

Munster, Anna (2016) ‘Techno-Animalities — the Case of the Monkey Selfie’
(presented at the Goldsmiths University, London),


Navas, Eduardo (2012) Remix Theory: The Aesthetics of Sampling (Vienna and New
York: Springer).

Parikka, Jussi, and Mercedes Bunz (11 July 2014) ‘A Mini-Interview: Mercedes
Bunz Explains Meson Press’, Machinology,
meson-press/>

Richards, Victoria (7 January 2016) ‘Monkey Selfie: Judge Rules Macaque Who
Took Grinning Photograph of Himself “Cannot Own Copyright”’, The Independent,
macaque-who-took-grinning-photograph-of-himself-cannot-own-
copyright-a6800471.html>

Robbins, Sarah (2003) ‘Distributed Authorship: A Feminist Case-Study Framework
for Studying Intellectual Property’, College English 66.2, 155–71,


Rose, Mark (1993) Authors and Owners: The Invention of Copyright (Cambridge,
MA: Harvard University Press).

Spinosa, Dani (14 May 2014) ‘“My Line (Article) Has Sighed”: Authorial
Subjectivity and Technology’, Generic Pronoun,


Star, Susan Leigh (1991) ‘The Sociology of the Invisible: The Primacy of Work
in the Writings of Anselm Strauss’, in Anselm Leonard Strauss and David R.
Maines (eds.), Social Organization and Social Process: Essays in Honor of Anselm Strauss (New York: A. de Gruyter).

* * *

[1](ch3.xhtml#footnote-152-backlink) The Authors’ Licensing and Collecting
Society is a [British](https://en.wikipedia.org/wiki/United_Kingdom)
membership organisation for writers, established in 1977 with over 87,000
members, focused on protecting and promoting authors’ rights. ALCS collects
and pays out money due to members for secondary uses of their work (copying,
broadcasting, recording etc.).

[2](ch3.xhtml#footnote-151-backlink) This survey was an update of an earlier
survey conducted in 2006 by the Centre of Intellectual Property Policy and
Management (CIPPM) at Bournemouth University.

[3](ch3.xhtml#footnote-150-backlink) ‘New Research into Authors’ Earnings
Released’, Authors’ Licensing and Collecting Society, 2014,
Us/News/News/What-are-words-worth-now-not-much.aspx>

[4](ch3.xhtml#footnote-149-backlink) Johanna Gibson, Phillip Johnson, and
Gaetano Dimita, The Business of Being an Author: A Survey of Author’s Earnings
and Contracts (London: Queen Mary University of London, 2015), p. 9,
[https://orca.cf.ac.uk/72431/1/Final Report - For Web Publication.pdf
](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)

[5](ch3.xhtml#footnote-148-backlink) ALCS, Press Release, ‘What Are Words Worth Now? Not Enough’, 8 July 2014, https://www.alcs.co.uk/news/what-are-words-worth-now-not-enough

[6](ch3.xhtml#footnote-147-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.

[7](ch3.xhtml#footnote-146-backlink) M. Kretschmer and P. Hardwick, Authors’
Earnings from Copyright and Non-Copyright Sources: A Survey of 25,000 British
and German Writers (Poole: CIPPM/ALCS Bournemouth University, 2007), p. 3,
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)

[8](ch3.xhtml#footnote-145-backlink) ALCS, Press Release, 8 July 2014, https://www.alcs.co.uk/news/what-are-words-worth-now-not-enough

[9](ch3.xhtml#footnote-144-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.

[10](ch3.xhtml#footnote-143-backlink) Ibid.

[11](ch3.xhtml#footnote-142-backlink) In the survey, three questions that
focus on various sources of remuneration do list digital publishing and/or
online uses as an option (questions 8, 11, and 15). Yet the data tables
provided in the appendix to the report do not provide the findings for
questions 11 and 15 nor do they differentiate according to type of media for
other tables related to remuneration. The only data table we find in the
report related to digital publishing is table 3.3, which lists ‘Earnings
ranked (1 to 7) in relation to categories of work’, where digital publishing
ranks third after books and magazines/periodicals, but before newspapers,
audio/audio-visual productions and theatre. This lack of focus on the effect
of digital publishing on writers’ incomes, for a survey that is ‘the first to
capture the impact of the digital revolution on writers’ working lives’, is
quite remarkable. Gibson, Johnson, and Dimita, The Business of Being an
Author, Appendix 2.

[12](ch3.xhtml#footnote-141-backlink) Ibid., p. 35.

[13](ch3.xhtml#footnote-140-backlink) Ibid.

[14](ch3.xhtml#footnote-139-backlink) Geert Lovink and Ned Rossiter (eds.),
MyCreativity Reader: A Critique of Creative Industries (Amsterdam: Institute
of Network Cultures, 2007), p. 14,


[15](ch3.xhtml#footnote-138-backlink) See:
estimates-january-2015/creative-industries-economic-estimates-january-2015
-key-findings>

[16](ch3.xhtml#footnote-137-backlink) Wendy Brown, Undoing the Demos:
Neoliberalism’s Stealth Revolution (Cambridge, MA: MIT Press, 2015), p. 31.

[17](ch3.xhtml#footnote-136-backlink) Therefore Lovink and Rossiter make a plea to ‘redefine creative industries outside of IP generation’. Lovink and Rossiter, MyCreativity Reader, p. 14.

[18](ch3.xhtml#footnote-135-backlink) Next to earnings made from writing in general, the survey on various occasions asks questions about earnings arising from specific categories of works and related to the number of works exploited (published/broadcast) during certain periods. Gibson, Johnson, and Dimita, The Business of Being an Author, Appendix 2.

[19](ch3.xhtml#footnote-134-backlink) Roger Chartier, The Order of Books:
Readers, Authors, and Libraries in Europe Between the 14th and 18th Centuries,
1st ed. (Stanford: Stanford University Press, 1994); Lisa Ede and Andrea A.
Lunsford, ‘Collaboration and Concepts of Authorship’, PMLA 116.2 (2001),
354–69; Adrian Johns, The Nature of the Book: Print and Knowledge in the
Making (Chicago, IL: University of Chicago Press, 1998); Jerome J. McGann, A Critique of Modern Textual Criticism (Charlottesville, VA: University of Virginia Press, 1992); Sarah Robbins, ‘Distributed Authorship: A Feminist
Case-Study Framework for Studying Intellectual Property’, College English 66.2
(2003), 155–71,

[20](ch3.xhtml#footnote-133-backlink) The ALCS survey addresses this problem,
of course, and tries to lobby on behalf of its authors for fair contracts with
publishers and intermediaries. That said, the survey findings show that only
42% of writers always retain their copyright. Gibson, Johnson, and Dimita, The
Business of Being an Author, p. 12.

[21](ch3.xhtml#footnote-132-backlink) Michel Foucault, ‘What Is an Author?’,
in James D. Faubion (ed.), Essential Works of Foucault, 1954–1984, Volume Two:
Aesthetics, Method, and Epistemology (New York: The New Press, 1998), p. 205.

[22](ch3.xhtml#footnote-131-backlink) Mark Rose, Authors and Owners: The
Invention of Copyright (Cambridge, MA: Harvard University Press, 1993).

[23](ch3.xhtml#footnote-130-backlink) Carys J. Craig, Joseph F. Turcotte, and
Rosemary J. Coombe, ‘What’s Feminist About Open Access? A Relational Approach
to Copyright in the Academy’, Feminists@law 1.1 (2011),


[24](ch3.xhtml#footnote-129-backlink) Ibid., p. 8.

[25](ch3.xhtml#footnote-128-backlink) Ibid., p. 9.

[26](ch3.xhtml#footnote-127-backlink) Lawrence Lessig, Remix: Making Art and
Commerce Thrive in the Hybrid Economy (New York: Penguin Press, 2008); Eduardo
Navas, Remix Theory: The Aesthetics of Sampling (Vienna and New York:
Springer, 2012); Henry Jenkins and Owen Gallagher, ‘“What Is Remix Culture?”:
An Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,
2008,

[27](ch3.xhtml#footnote-126-backlink) Craig, Turcotte, and Coombe, ‘What’s Feminist About Open Access?’, p. 27.

[28](ch3.xhtml#footnote-125-backlink) Ibid., p. 14.

[29](ch3.xhtml#footnote-124-backlink) Ibid., p. 26.

[30](ch3.xhtml#footnote-123-backlink) Janneke Adema, ‘Open Access’, in
Critical Keywords for the Digital Humanities (Lueneburg: Centre for Digital
Cultures (CDC), 2014); Janneke Adema,
‘Embracing Messiness’, LSE Impact of Social Sciences, 2014,
adema-pdsc14/>; Gary Hall, Digitize This Book!: The Politics of New Media, or
Why We Need Open Access Now (Minneapolis, MN: University of Minnesota Press,
2008), p. 197; Sarah Kember, ‘Why Write?: Feminism, Publishing and the
Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116; Samuel A. Moore, ‘A Genealogy of
Open Access: Negotiations between Openness and Access to Research’, Revue
Française des Sciences de l’information et de la Communication, 2017.

[31](ch3.xhtml#footnote-122-backlink) Florian Cramer, Anti-Media: Ephemera on
Speculative Arts (Rotterdam and New York: nai010 publishers, 2013).

[32](ch3.xhtml#footnote-121-backlink) Especially within humanities publishing
there is a reluctance to allow derivative uses of one’s work in an open access
setting.

[33](ch3.xhtml#footnote-120-backlink) In 2015 the Radical Open Access
Conference took place at Coventry University, which brought together a large
array of presses and publishing initiatives (often academic-led) in support of
an ‘alternative’ vision of open access and scholarly communication.
Participants in this conference subsequently formed the loosely allied Radical
Open Access Collective: [radicaloa.co.uk](https://radicaloa.co.uk/). As the
conference concept outlines, radical open access entails ‘a vision of open
access that is characterised by a spirit of on-going creative experimentation,
and a willingness to subject some of our most established scholarly
communication and publishing practices, together with the institutions that
sustain them (the library, publishing house etc.), to rigorous critique.
Included in the latter will be the asking of important questions about our
notions of authorship, authority, originality, quality, credibility,
sustainability, intellectual property, fixity and the book — questions that
lie at the heart of what scholarship is and what the university can be in the
21st century’. Janneke Adema and Gary Hall, ‘The Political Nature of the Book:
On Artists’ Books and Radical Open Access’, New Formations 78.1 (2013),
138–56; Janneke Adema and Samuel Moore, ‘Collectivity and Collaboration: Imagining New Forms of Communality to Create Resilience In Scholar-Led Publishing’, Insights 31.3 (2018); Gary Hall, ‘Radical Open Access in the
Humanities’ (presented at the Research Without Borders, Columbia University,
2010), humanities/>; Janneke Adema, ‘Knowledge Production Beyond The Book? Performing
the Scholarly Monograph in Contemporary Digital Culture’ (PhD dissertation,
Coventry University, 2015),
f4c62c77ac86/1/ademacomb.pdf>

[34](ch3.xhtml#footnote-119-backlink) Julien McHardy, ‘Why Books Matter: There
Is Value in What Cannot Be Evaluated’, Impact of Social Sciences, 2014, n.p.,
[http://blogs.lse.ac.uk/impactofsocialsciences/2014/09/30/why-books-matter/](http://blogs.lse.ac.uk/impactofsocialsciences/2014/09/30/why-books-matter/)

[35](ch3.xhtml#footnote-118-backlink) Karen Barad, Meeting the Universe
Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham,
N.C. and London: Duke University Press, 2007).

[36](ch3.xhtml#footnote-117-backlink) Annemarie Mol, The Logic of Care: Health
and the Problem of Patient Choice, 1st ed. (London and New York: Routledge,
2008).

[37](ch3.xhtml#footnote-116-backlink) Sebastian Abrahamsson and others,
‘Mattering Press: New Forms of Care for STS Books’, The EASST Review 32.4
(2013), press-new-forms-of-care-for-sts-books/>

[38](ch3.xhtml#footnote-115-backlink) McHardy, ‘Why Books Matter’.

[39](ch3.xhtml#footnote-114-backlink) Ibid.

[40](ch3.xhtml#footnote-113-backlink) Susan Leigh Star, ‘The Sociology of the
Invisible: The Primacy of Work in the Writings of Anselm Strauss’, in Anselm
Leonard Strauss and David R. Maines (eds.), Social Organization and Social
Process: Essays in Honor of Anselm Strauss (New York: A. de Gruyter, 1991).
Mattering Press is not alone in exploring an ethics of care in relation to
(academic) publishing. Sarah Kember, director of Goldsmiths Press, is also adamant in her desire to make the underlying processes of publishing (i.e. peer review, citation practices) more transparent and accountable. Sarah Kember, ‘Why Publish?’, Learned Publishing 29 (2016), 348–53. Mercedes Bunz, one of the editors running
Meson Press, argues that a sociology of the invisible would incorporate
‘infrastructure work’, the work of accounting for, and literally crediting
everybody involved in producing a book: ‘A book isn’t just a product that
starts a dialogue between author and reader. It is accompanied by lots of
other academic conversations — peer review, co-authors, copy editors — and
these conversations deserve to be taken more serious’. Jussi Parikka and
Mercedes Bunz, ‘A Mini-Interview: Mercedes Bunz Explains Meson Press’,
Machinology, 2014, mercedes-bunz-explains-meson-press/>. For Open Humanities Press, authorship is collaborative and often even anonymous: for example, they are experimenting
with research published in wikis to further complicate the focus on single
authorship and a static marketable book object within academia (see their
living and liquid books series).

[41](ch3.xhtml#footnote-112-backlink) Lori Emerson, ‘Digital Poetry as
Reflexive Embodiment’, in Markku Eskelinen, Raine Koskimaa, Loss Pequeño
Glazier and John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 88–106.

[42](ch3.xhtml#footnote-111-backlink) Dani Spinosa, ‘“My Line (Article) Has
Sighed”: Authorial Subjectivity and Technology’, Generic Pronoun, 2014.

[43](ch3.xhtml#footnote-110-backlink) Spinosa, ‘My Line (Article) Has Sighed’.

[44](ch3.xhtml#footnote-109-backlink) Emerson, ‘Digital Poetry as Reflexive
Embodiment’, p. 89.

[45](ch3.xhtml#footnote-108-backlink) Rolf Hughes, ‘Orderly Disorder: Post-
Human Creativity’, in Proceedings of the Linköping Electronic Conference
(Linköpings universitet: University Electronic Press, 2005).

[46](ch3.xhtml#footnote-107-backlink) N. Katherine Hayles, ‘Print Is Flat,
Code Is Deep: The Importance of Media-Specific Analysis’, Poetics Today 25.1
(2004), 67–90; Johanna Drucker, ‘Performative Materiality and Theoretical Approaches to Interface’, Digital Humanities Quarterly 7.1 (2013); Johanna
Drucker, ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1 (2014), 11–29.

[47](ch3.xhtml#footnote-106-backlink) Nick Montfort, ‘The Coding and Execution
of the Author’, in Markku Eskelinen, Raine Koskimaa, Loss Pequeño Glazier and John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 201–17 (p. 201).

[48](ch3.xhtml#footnote-105-backlink) Montfort, ‘The Coding and Execution of
the Author’, p. 202.

[49](ch3.xhtml#footnote-104-backlink) Lori Emerson, ‘Materiality,
Intentionality, and the Computer-Generated Poem: Reading Walter Benn Michaels
with Erin Mouré’s Pillage Laud’, ESC: English Studies in Canada 34
(2008), 66.

[50](ch3.xhtml#footnote-103-backlink) Marcus Boon, In Praise of Copying
(Cambridge, MA: Harvard University Press, 2010); Johanna Drucker, ‘Humanist
Computing at the End of the Individual Voice and the Authoritative Text’, in
Patrik Svensson and David Theo Goldberg (eds.), Between Humanities and the
Digital (Cambridge, MA: MIT Press, 2015), pp. 83–94.

[51](ch3.xhtml#footnote-102-backlink) We have to take into consideration here
that print-based cultural products were never fixed or static; the dominant
discourses constructed around them just perceive them to be so.

[52](ch3.xhtml#footnote-101-backlink) Craig, Turcotte, and Coombe, ‘What’s
Feminist About Open Access?’, p. 2.

[53](ch3.xhtml#footnote-100-backlink) Ibid.

[54](ch3.xhtml#footnote-099-backlink) Johanna Gibson, Creating Selves:
Intellectual Property and the Narration of Culture (Aldershot, UK, and
Burlington: Routledge, 2007), p. 7.

[55](ch3.xhtml#footnote-098-backlink) Gibson, Creating Selves, p. 7.

[56](ch3.xhtml#footnote-097-backlink) Ibid.

[57](ch3.xhtml#footnote-096-backlink) Kenneth Goldsmith, Uncreative Writing:
Managing Language in the Digital Age (New York: Columbia University Press,
2011), p. 227.

[58](ch3.xhtml#footnote-095-backlink) Ibid., p. 15.

[59](ch3.xhtml#footnote-094-backlink) Goldsmith, Uncreative Writing, p. 81.

[60](ch3.xhtml#footnote-093-backlink) Ibid.

[61](ch3.xhtml#footnote-092-backlink) It is worth emphasising that what
Goldsmith perceives as ‘uncreative’ notions of writing (including
appropriation, pastiche, and copying), have a prehistory that can be traced
back to antiquity (thanks go out to this chapter’s reviewer for pointing this
out). One example of this, which uses the method of cutting and pasting —
something I have outlined more in depth elsewhere — concerns the early modern
commonplace book. Commonplacing as ‘a method or approach to reading and
writing involved the gathering and repurposing of meaningful quotes, passages
or other clippings from published books by copying and/or pasting them into a
blank book.’ Janneke Adema, ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in
Remix Studies (New York and London: Routledge, 2017), pp. 104–14.

[62](ch3.xhtml#footnote-091-backlink) Gibson, Creating Selves, p. 27.

[63](ch3.xhtml#footnote-090-backlink) For example, animals cannot own
copyright. See the case of Naruto, the macaque monkey that took a ‘selfie’
photograph of itself. Victoria Richards, ‘Monkey Selfie: Judge Rules Macaque
Who Took Grinning Photograph of Himself “Cannot Own Copyright”’, The
Independent, 7 January 2016, /monkey-selfie-judge-rules-macaque-who-took-grinning-photograph-of-himself-
cannot-own-copyright-a6800471.html>

[64](ch3.xhtml#footnote-089-backlink) Anna Munster, ‘Techno-Animalities — the
Case of the Monkey Selfie’ (presented at Goldsmiths, University of London, 2016).

[65](ch3.xhtml#footnote-088-backlink) Sarah Kember, ‘Why Write?: Feminism,
Publishing and the Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116.

Thylstrup
The Politics of Mass Digitization
2019


The Politics of Mass Digitization

Nanna Bonde Thylstrup

The MIT Press

Cambridge, Massachusetts

London, England

# Table of Contents

1. Acknowledgments
2. I Framing Mass Digitization
1. 1 Understanding Mass Digitization
3. II Mapping Mass Digitization
1. 2 The Trials, Tribulations, and Transformations of Google Books
2. 3 Sovereign Soul Searching: The Politics of Europeana
3. 4 The Licit and Illicit Nature of Mass Digitization
4. III Diagnosing Mass Digitization
1. 5 Lost in Mass Digitization
2. 6 Concluding Remarks
5. References
6. Index

## List of figures

1. Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.
2. Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R. Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S. Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

# Acknowledgments

I am very grateful to all those who have contributed to this book in various
ways. I owe special thanks to Bjarki Valtysson, Frederik Tygstrup, and Peter
Duelund, for their supervision and help thinking through this project, its
questions, and its forms. I also wish to thank Andrew Prescott, Tobias Olsson,
and Rune Gade for making my dissertation defense a memorable and thoroughly
enjoyable day of constructive critique and lively discussions. Important parts
of the research for this book further took place during three visiting stays
at Cornell University, Duke University, and Columbia University. I am very
grateful to N. Katherine Hayles, Andreas Huyssen, Timothy Brennan, Lydia
Goehr, Rodney Benson, and Fredric Jameson, who generously welcomed me across
the Atlantic and provided me with invaluable new perspectives, as well as
theoretical insights and challenges. Beyond the aforementioned, three people
in particular have been instrumental in terms of reading through drafts and in
providing constructive challenges, intellectual critique, moral support, and
fun times in equal proportions—thank you so much Kristin Veel, Henriette
Steiner, and Daniela Agostinho. Marianne Ping-Huang has further offered
invaluable support to this project and her theoretical and practical
engagement with digital archives and academic infrastructures continues to be
a source of inspiration. I am also immensely grateful to all the people
working on or with mass digitization who generously volunteered their time to
share with me their visions for, and perspectives on, mass digitization.

This book has further benefited greatly from dialogues taking place within the
framework of two larger research projects, which I have been fortunate enough
to be involved in: Uncertain Archives and The Past’s Future. I am very
grateful to all my colleagues in both these research projects: Kristin Veel,
Daniela Agostinho, Annie Ring, Katrine Dirkinck-Holmfeldt, Pepita Hesselberth,
Kristoffer Ørum, Ekaterina Kalinina, and Anders Søgaard, as well as Helle Porsdam,
Jeppe Eimose, Stina Teilmann, John Naughton, Jeffrey Schnapp, Matthew Battles,
and Fiona McMillan. I am further indebted to La Vaughn Belle, George Tyson,
Temi Odumosu, Mathias Danbolt, Mette Kia, Lene Asp, Marie Blønd, Mace Ojala,
Renee Ridgway, and many others for our conversations on the ethical issues of
the mass digitization of colonial material. I have also benefitted from the
support and insights offered by other colleagues at the Department of Arts and
Cultural Studies, University of Copenhagen.

A big part of writing a book is also about keeping sane, and for this you need
great colleagues that can pull you out of your own circuit and launch you into
other realms of inquiry through collaboration, conversation, or just good
times. Thank you Mikkel Flyverbom, Rasmus Helles, Stine Lomborg, Helene
Ratner, Anders Koed Madsen, Ulrik Ekman, Solveig Gade, Anna Leander, Mareile
Kaufmann, Holger Schulze, Jakob Kreutzfeld, Jens Hauser, Nan Gerdes, Kerry
Greaves, Mikkel Thelle, Mads Rosendahl Thomsen, Knut Ove Eliassen, Jens-Erik
Mai, Rikke Frank Jørgensen, Klaus Bruhn Jensen, Marisa Cohn, Rachel Douglas-
Jones, Taina Bucher, and Baki Cakici. To this end you also need good
friends—thank you Thomas Lindquist Winther-Schmidt, Mira Jargil, Christian
Sønderby Jepsen, Agnete Sylvest, Louise Michaëlis, Jakob Westh, Gyrith Ravn,
Søren Porse, Jesper Værn, Jacob Thorsen, Maia Kahlke, Josephine Michau, Lærke
Vindahl, Chris Pedersen, Marianne Kiertzner, Rebecca Adler-Nissen, Stig
Helveg, Ida Vammen, Alejandro Savio, Lasse Folke Henriksen, Siine Jannsen,
Rens van Munster, Stephan Alsman, Sayuri Alsman, Henrik Moltke, Sean Treadway,
and many others. I also have to thank Christer and all the people at
Alimentari and CUB Coffee who kept my caffeine levels replenished when I tired
of the ivory tower.

I am furthermore very grateful for the wonderful guidance and support from MIT
Press, including Noah Springer, Marcy Ross, and Susan Clark—and of course for
the many inspiring conversations with and feedback from Doug Sery. I also want
to thank the anonymous peer reviewers whose insightful and constructive
comments helped improve this book immensely. Research for this book was
supported by grants from the Danish Research Council and the Velux Foundation.

Last, but not least, I wish to thank my loving partner Thomas Gammeltoft-
Hansen for his invaluable and critical input, optimistic outlook, and perfect
morning cappuccinos; my son Georg and daughter Liv for their general
awesomeness; and my extended family—Susanne, Bodil, and Hans—for their support
and encouragement.

I dedicate this book to my parents, Karen Lise Bonde Thylstrup and Asger
Thylstrup, without whom neither this book nor I would have materialized.

# I
Framing Mass Digitization

# 1
Understanding Mass Digitization

## Introduction

Mass digitization is first and foremost a professional concept. While it has
become a disciplinary buzzword used to describe large-scale digitization
projects of varying scope, it enjoys little circulation beyond the confines of
information science and such projects themselves. Yet, as this book argues, it
has also become a defining concept of our time. Indeed, it has even attained
the status of a cultural and moral imperative and obligation.1 Today, anyone
with an Internet connection can access hundreds of millions of digitized
cultural artifacts from the comfort of their desk—or many other locations—and
cultural institutions and private bodies add thousands of new cultural works
to the digital sphere every day. The practice of mass digitization is forming
new nexuses of knowledge, and new ways of engaging with that knowledge. What
at first glance appears to be a simple act of digitization (the transformation
of singular books from boundary objects to open sets of data), reveals, on
closer examination, a complex process teeming with diverse political, legal,
and cultural investments and controversies.

This volume asks why mass digitization has become such a “matter of concern,”2
and explores its implications for the politics of cultural memory. In
practical terms, mass digitization is digitization on an industrial scale. But
in cultural terms, mass digitization is much more than this. It is the promise
of heightened access to—and better preservation of—the past, and of more
original scholarship and better funding opportunities. It also promises
entirely new ways of reading, viewing, and structuring archives, new forms of
value and their extraction, and new infrastructures of control. This volume
argues that the shape-shifting quality of mass digitization, and its social
dynamics, alters the politics of cultural memory institutions. Two movements
simultaneously drive mass digitization programs: the relatively new phenomenon
of big data gold rushes, and the historically more familiar archival
accumulative imperative. Yet despite these prospects, mass digitization
projects are also uphill battles. They are costly and speculative processes,
with no guaranteed rate of return, and they are constantly faced by numerous
limitations and contestations on legal, social, and cultural levels.
Nevertheless, both public and private institutions adamantly emphasize the
need to digitize on a massive scale, motivating initiatives around the
globe—from China to Russia, Africa to Europe, South America to North America.
Some of these initiatives are bottom-up projects driven by highly motivated
individuals, while others are top-down and governed by complex bureaucratic
apparatuses. Some are backed by private money, others publicly funded. Some
exist as actual archives, while others figure only as projections in policy
papers. As the ideal of mass digitization filters into different global
empirical situations, the concept of mass digitization attains nuanced
political hues. While all projects formally seek to serve the public interest,
they are in fact infused with much more diverse, and often conflicting,
political and commercial motives and dynamics. The same mass digitization
project can even be imbued with different and/or contradictory investments,
and can change purpose and function over time, sometimes rapidly.

Mass digitization projects are, then, highly political. But they are not
political in the sense that they transfer the politics of analog cultural
memory institutions into the digital sphere 1:1, or even liberate cultural
memory artifacts from the cultural politics of analog cultural memory
institutions. Rather, mass digitization presents a new political cultural
memory paradigm, one in which we see strands of technical and ideological
continuities combine with new ideals and opportunities; a political cultural
memory paradigm that is arguably even more complex—or at least appears more
messy to us now—than that of analog institutions, whose politics we have had
time to get used to. In order to grasp the political stakes of mass
digitization, therefore, we need to approach mass digitization projects not as
a continuation of the existing politics of cultural memory, or as purely
technical endeavors, but rather as emerging sociopolitical and sociotechnical
phenomena that introduce new forms of cultural memory politics.

## Framing, Mapping, and Diagnosing Mass Digitization

Interrogating the phenomenon of mass digitization, this book asks the question
of how mass digitization affects the politics of cultural memory institutions.
As a matter of practice, something is clearly changing in the conversion of
bounded—and scarce—historical material into ubiquitous ephemeral data. In
addition to the technical aspects of digitization, mass digitization is also
changing the political territory of cultural memory objects. Global commercial
platforms are increasingly administering and operating their scanning
activities in favor of the digital content they reap from the national “data
tombs” of museums and libraries and the feedback loops these generate. This
integration of commercial platforms into the otherwise primarily public
institutional set-up of cultural memory has produced a reconfiguration of the
political landscape of cultural memory from the traditional symbolic politics
of scarcity, sovereignty, and cultural capital to the late-sovereign
infrapolitics of standardization and subversion.

The empirical outlook of the present book is predominantly Western. Yet, the
overarching dynamics that have been pursued are far from limited to any one
region or continent, nor limited solely to the field of cultural memory.
Digitization is a global phenomenon and its reliance on late-sovereign
politics and subpolitical governance forms are shared across the globe.

The central argument of this book is that mass digitization heralds a new kind
of politics in the regime of cultural memory. Mass digitization of cultural
memory is neither a neutral technical process nor a transposition of the
politics of analog cultural heritage to the digital realm on a 1:1 scale. The
limitations of using conventional cultural-political frameworks for
understanding mass digitization projects become clear when working through the
concepts and regimes of mass digitization. Mass digitization brings together
so many disparate interests and elements that any mono-theoretical lens would
fail to account for the numerous political issues arising within the framework
of mass digitization. Rather, mass digitization should be approached as an
_infrapolitical_ process that brings together a multiplicity of interests
hitherto foreign to the realm of cultural memory.

The first part of the book, “framing,” outlines the theoretical arguments in
the book—that the political dynamics of mass digitization organize themselves
around the development of the technical infrastructures of mass digitization
in late-sovereign frameworks. Fusing infrastructure theory and theories on the
political dynamics of late sovereignty allows us to understand mass
digitization projects as cultural phenomena that are highly dependent on
standardization and globalization processes, while also recognizing that their
resultant infrapolitics can operate as forms of both control and subversion.

The second part of the book, “mapping,” offers an analysis of three different
mass digitization phenomena and how they relate to the late-sovereign politics
that gave rise to them. The part thus examines the historical foundation,
technical infrastructures, and (il)licit status and ideological underpinnings
of three variations of mass digitization projects: primarily corporate,
primarily public, and primarily private. While these variations may come
across as reproductions of more conventional societal structures, the chapters
in part two nevertheless also present us with a paradox: while the different
mass digitization projects that appear in this book—from Google’s privatized
endeavor to Europeana’s supranational politics to the unofficial initiatives
of shadow libraries—have different historical and cultural-political
trajectories and conventional regimes of governance, they also undermine these
conventional categories as they morph and merge into new infrastructures and
produce a new form of infrapolitics. The case studies featured in this book
are not to be taken as exhaustive examples, but rather as distinct, yet
nevertheless entangled, examples of how analog cultural memory is taken online
on a digital scale. They have been chosen with the aim of showing the
diversity of mass digitization, but also how it, as a phenomenon, ultimately
places the user in the dilemma of digital capitalism with its ethos of access,
speed, and participation (in varying degrees). The choices also have their
limitations, however. In their Western bias, which is partly rooted in this
author’s lack of language skills (specifically in Russian and Chinese), for
instance, they fail to capture the breadth and particularities of the
infrapolitics of mass digitization in other parts of the world. Much more
research is needed in this area.

The final part of the book, “diagnosing,” zooms in on the pathologies of mass
digitization in relation to affective questions of desire and uncertainty.
This part argues that instead of approaching mass digitization projects as
rationalized and instrumental projects, we should rather acknowledge them as
ambivalent spatio-temporal projects of desire and uncertainty. Indeed, as the
third part concludes, it is exactly uncertainty and desire that organize the
new spatio-temporal infrastructures of cultural memory institutions, where
notions such as serendipity and the infrapolitics of platforms have taken
precedence over accuracy and sovereign institutional politics. The third part
thus calls into question arguments that imagine mass digitization as
instrumentalized projects that either undermine or produce values of
serendipity, as well as overarching narratives of how mass digitization
produces uncomplicated forms of individualized empowerment and freedom.
Instead, the chapter draws attention to the new cultural logics of platforms
that affect the cultural politics of mass digitization projects.

Crucially, then, this book seeks neither to condemn nor celebrate mass
digitization, but rather to unpack the phenomenon and anchor it in its
contemporary political reality. It offers a story of the ways in which mass
digitization produces new cultural memory institutions online that may be
entwined in the cultural politics of their analog origins, but also raises new political questions for the collections.

## Setting the Stage: Assembling the Motley Crew of Mass Digitization

The dream and practice of mass digitizing cultural works has been around for
decades and, as this section attests, the projects vary significantly in
shape, size, and form. While rudimentary and nonexhaustive, this section
gathers a motley collection of mass digitization initiatives, from some of the
earliest digitization programs to later initiatives. The goal of this section
is thus not so much to meticulously map mass digitization programs, but rather
to provide examples of projects that might illuminate the purpose of this book
and its efforts to highlight the infrastructural politics of mass
digitization. As the section attests, mass digitization is anything but a
streamlined process. Rather, it is a painstakingly complex process mired in
legal, technical, personal, and political challenges and problems, and it is a
vision whose grand rhetoric often works to conceal its messy reality.

It is pertinent to note that mass digitization suffers from the combined
gendered and racialized reality of cultural institutions, tech corporations,
and infrastructural projects: save a few exceptions, there is precious little
diversity in the official map of mass digitization, even in those projects
that emerge bottom-up. This does not mean that women and minorities have not
formed a crucial part of mass digitization, selecting cultural objects,
prepping them (for instance ironing newspapers to ensure that they are flat),
scanning them, and constructing their digital infrastructures. However, more
often than not, their contributions fade into the background as tenders of the
infrastructures of mass digitization rather than as the (predominantly white,
male) “face” of mass digitization. As such, an important dimension of the
politics of these infrastructural projects is their reproduction of
established gendered and racialized infrastructures already present in both
cultural institutions and the tech industry.3 This book hints at these crucial
dimensions of mass digitization, but much more work is needed to change the
familiar cast of cultural memory institutions, both in the analog and digital
realms.

With these introductory remarks in place, let us now turn to the long and
winding road to mass digitization as we know it today. Locating the exact
origins of this road is a subjective task that often ends up trapping the
explorer in the mirror halls of technology. But it is worth noting that of
course there existed, before the Internet, numerous attempts at capturing and
remediating books in scalable forms, for the purposes both of preservation and
of extending the reach of library collections. One of the most revolutionary
of such technologies before the digital computer or the Internet was
microfilm, which was first held forth as a promising technology of
preservation and remediation in the middle of the 1800s.4 At the beginning of
the twentieth century, the Belgian author, entrepreneur, visionary, lawyer,
peace activist, and one of the founders of information science, Paul Otlet,
brought the possibilities of microfilm to bear directly on the world of
libraries. Otlet authored two influential think pieces that outlined the
benefits of microfilm as a stable and long-term remediation format that could,
ultimately, also be used to extend the reach of literature, just as he and his
collaborator, inventor and engineer Robert Goldschmidt, co-authored a work on
the new form of the book through microphotography, _Sur une forme nouvelle du
livre: le livre microphotographique_. 5 In his analyses, Otlet suggested that
the most important transformations would not take place in the book itself,
but in substitutes for it. Some years later, beginning in 1927 with the
Library of Congress microfilming more than three million pages of books and
manuscripts in the British Library, the remediation of cultural works in
microformat became a widespread practice across the world, and microfilm is
still in use to this day.6 Otlet did not confine himself to thinking only
about microphotography, however, but also pursued a more speculative vein,
inspired by contemporary experiments with electromagnetic waves, arguing that
the most radical change of the book would be wireless technology. Moreover, he
also envisioned and partly realized a physical space, _Mundaneum_ , for his
dreams of a universal archive. Paul Otlet and Nobel Peace Prize Winner Henri
La Fontaine conceived of Mundaneum in 1895 as part of their work on
documentation science. Otlet called the Mundaneum “… an Idea, an Institution,
a Method, a Body of work materials and collections, a Building, a Network.” In
more concrete, but no less ambitious terms, the Mundaneum was to gather
together all the world’s knowledge and classify it according to a universal
system they developed called the “Universal Decimal Classification.” In 1910,
Otlet and La Fontaine found a place for their work in the Palais du
Cinquantenaire, a government building in Brussels. Later, Otlet commissioned
Le Corbusier to design a building for the Mundaneum in Geneva. The cooperation
ended unsuccessfully, however, and the Mundaneum later led a nomadic life, moving from The
Hague to Brussels and then in 1993 to the city of Mons in Belgium, where it
now exists as a museum called the Mundaneum Archive Center. Fatefully, Mons, a
former mining district, also houses Google’s largest data center in Europe and
it did not take Google long to recognize the cultural value in entering a
partnership with the Mundaneum, the two parties signing a contract in 2013.
The contract entailed among other things that Google would sponsor a traveling
exhibit on the Mundaneum, as well as a series of talks on Internet issues at
the museum and the university, and that the Mundaneum would use Google’s
social networking service, Google Plus, as a promotional tool. An article in
the _New York Times_ described the partnership as “part of a broader campaign
by Google to demonstrate that it is a friend of European culture, at a time
when its services are being investigated by regulators on a variety of
fronts.” 7 The collaboration not only spurred international interest, but also
inspired a group of influential tech activists and artists closely associated
with the creative work of shadow libraries to create the critical archival
project Mondotheque.be, a platform for “discussing and exploring the way
knowledge is managed and distributed today in a way that allows us to invent
other futures and different narrations of the past,”8 and a resulting digital
publication project, _The Radiated Book,_ authored by an assembly of
activists, artists, and scholars such as Femke Snelting, Tomislav Medak,
Dusan Barok, Geraldine Juarez, Shin Joung Yeo, and Matthew Fuller. 9

Another early precursor of mass digitization emerged with Project Gutenberg,
often referred to as the world’s oldest digital library. Project Gutenberg was
the brainchild of author Michael S. Hart, who in 1971, using technologies such
as ARPANET, Bulletin Board Systems (BBS), and Gopher protocols, experimented
with publishing and distributing books in digital form. As Hart reminisced in
his later text, “The History and Philosophy of Project Gutenberg,”10 Project
Gutenberg emerged out of a donation he received as an undergraduate in 1971,
which consisted of $100 million worth of computing time on the Xerox Sigma V
mainframe at the University of Illinois at Urbana-Champaign. Wanting to make
good use of the donation, Hart, in his own words, “announced that the greatest
value created by computers would not be computing, but would be the storage,
retrieval, and searching of what was stored in our libraries.”11 He therefore
committed himself to converting analog cultural works into digital text in a
format not only available to, but also accessible/readable to, almost all
computer systems: “Plain Vanilla ASCII” (ASCII for “American Standard Code for
Information Interchange”). While Project Gutenberg only converted about 50
works into digital text in the 1970s and the 1980s (the first was the
Declaration of Independence), it today hosts up to 56,000 texts in its
distinctly lo-fi manner.12 Interestingly, Michael S. Hart noted very early on
that the intention of the project was never to reproduce authoritative
editions of works for readers—“who cares whether a certain phrase in
Shakespeare has a ‘:’ or a ‘;’ between its clauses”—but rather to “release
etexts that are 99.9% accurate in the eyes of the general reader.”13 As the
present book attests, this early statement captures one of the central points
of contestation in mass digitization: the trade-off between accuracy and
accessibility, raising questions both of the limits of commercialized
accelerated digitization processes (see chapter 2 on Google Books) and of
class-based and postcolonial implications (see chapter 4 on shadow libraries).

If Project Gutenberg spearheaded the efforts of bringing cultural works into
the digital sphere through manual conversion of analog text into lo-fi digital
text, a French mass digitization project affiliated with the construction of
the Bibliothèque nationale de France (BnF) initiated in 1989 could be
considered one of the earliest examples of actually digitizing cultural works
on an industrial scale.14 The French were thus working on blueprints of mass
digitization programs before mass digitization became a widespread practice, as part of the construction of a new national library, under the guidance of Alain Giffard and initiated by François Mitterrand. In a letter sent in 1990 to Prime Minister Michel Rocard, President Mitterrand outlined his vision of a
digital library, noting that “the novelty will be in the possibility of using
the most modern computer techniques for access to catalogs and documents of
the Bibliothèque nationale de France.”15 The project managed to digitize a
body of 70,000–80,000 titles, a sizeable number of works for its time. As
Alain Giffard noted in hindsight, “the main difficulty for a digitization
program is to choose the books, and to choose the people to choose the
books.”16 Explaining in a conversation with me how he went about this task,
Giffard emphasized that he chose “not librarians but critics, researchers,
etc.” This choice, he underlined, could be made only because the digitization
program was “the last project of the president and a special mission” and thus
not formally a civil service program.17 The work process was thus as follows:

> I asked them to prepare a list. I told them, “Don’t think about what exists.
I ask of you a list of books that would be logical in this concept of a
library of France.” I had the first list and we showed it to the national
library, which was always fighting internally. So I told them, “I want this
book to be digitized.” But they would never give it to us because of
territory. Their ship was not my ship. So I said to them, “If you don’t give
me the books I shall buy the books.” They said I could never buy them, but
then I started buying the books from antiques suppliers because I earned a lot
of money at that time. So in the end I had a lot of books. And I said to them,
“If you want the books digitized you must give me the books.” But of the
80,000 books that were digitized, half were not in the collection. I used the
staff’s garages for the books, 80,000 books. It is an incredible story.18

Incredible indeed. And a wonderful anecdote that makes clear that mass
digitization, rather than being just a technical challenge, is also a
politically contingent process that raises fundamental questions of territory
(institutional as well as national), materiality, and culture. The integration
of the digital _très grande bibliothèque_ into the French national mass
digitization project Gallica, later in 1997, also foregrounds the
infrastructural trajectory of early national digitization programs into later
glocal initiatives. 19

The question of pan-national digitization programs was precisely at the
forefront of another early prominent mass digitization project, namely the
Universal Digital Library (UDL), which was launched in 1995 by Carnegie Mellon
computer scientist Raj Reddy and developed by linguist Jaime Carbonell,
physicist Michael Shamos, and Carnegie Mellon Foundation dean of libraries
Gloriana St. Clair. In 1998, the project launched the Thousand Book Project.
Later, the UDL scaled its initial efforts up to the Million Book Project,
which it successfully completed in 2007.20 Organizationally, the UDL stood
out from many of the other digitization projects by including initial
participation from three non-Western entities in addition to the Carnegie
Mellon Foundation—the governments of India, China, and Egypt.21 Indeed, India
and China invested about $10 million in the initial phase, employing several
hundred people to find books, bring them in, and take them back. While the
project ambitiously aimed to provide access “to all human knowledge, anytime,
anywhere,” it ended its scanning activities in 2008. As such, the Universal
Digital Library points to another central infrastructural dimension of mass
digitization: its highly contingent spatio-temporal configurations that are
often posed in direct contradistinction to the universalizing discourse of
mass digitization. Across the board, mass digitization projects, while
confining themselves in practice to a limited target of how many books they
will digitize, employ a discourse of universality, perhaps alluding vaguely to
how long such an endeavor will take but in highly uncertain terms (see
chapters 3 and 5 in particular).

No exception from the universalizing discourse, another highly significant
mass digitization project, the Internet Archive, emerged around the same time
as the Universal Digital Library. The Internet Archive was founded by open
access activist and computer engineer Brewster Kahle in 1996, and although it
was primarily oriented toward preserving born-digital material, in particular
the Internet ( _Wired_ calls Brewster Kahle “the Internet’s de facto
librarian” 22), the Archive also began digitizing books in 2005, supported by
a grant from the Alfred Sloan Foundation. Later that year, the Internet
Archive created the infrastructural initiative, Open Content Alliance (OCA),
and was now embedded in an infrastructure that included over 30 major US
libraries, as well as major search engines (by Yahoo! and Microsoft),
technology companies (Adobe and Xerox), a commercial publisher (O’Reilly
Media, Inc.), and a not-for-profit membership organization of more than 150
institutions, including universities, research libraries, archives, museums,
and historical societies.23 The Internet Archive’s mass digitization
infrastructure was thus from the beginning a mesh of public and private
cooperation, where libraries made their collections available to the Alliance
for scanning, and corporate sponsors or the Internet Archive conversely funded
the digitization processes. As such, the infrastructures of the Internet
Archive and Google Books were rather similar in their set-ups.24 Nevertheless,
the initiative of the Internet Archive’s mass digitization project and its
attendant infrastructural alliance, OCA, should be read as both a technical
infrastructure responding to the question of _how_ to mass digitize in
technical terms, and as an infrapolitical reaction in response to the forces
of the commercial world that were beginning to gather around mass
digitization, such as Amazon 25 and Google. The Internet Archive thus
positioned itself as a transparent open source alternative to the closed doors
of corporate and commercial initiatives. Yet, as Kalev Leetaru notes, the case
was more complex than that. Indeed, while the OCA was often foregrounded as
more transparent than Google, their technical infrastructural components and
practices were in fact often just as shrouded in secrecy.26 As such, the
Internet Archive and the OCA draw attention to the important infrapolitical
question in mass digitization, namely how, why, and when to manage
visibilities in mass digitization projects.

Although the media sometimes picked up stories on mass digitization projects
already outlined, it wasn’t until Google entered the scene that mass
digitization became a headline-grabbing enterprise. In 2004, Google founders
Larry Page and Sergey Brin traveled to Frankfurt to make a rare appearance at
the Frankfurt Book Fair. Google was at that time still considered a “scrappy”
Internet company in some quarters, as compared with tech giants such as
Microsoft.27 Yet Page and Brin went to Frankfurt to deliver a monumental
announcement: Google would launch a ten-year plan to make available
approximately 15 million digitized books, both in- and out-of-copyright
works.28 They baptized the program “Google Print,” a project that consisted of
a series of partnerships between Google and five English-language libraries:
the University of Michigan at Ann Arbor, Stanford, Harvard, Oxford (Bodleian
Library), and the New York City Public Library. While Page’s and Brin’s
announcement was surprising to some, many had anticipated it; as already
noted, advances toward mass digitization proper had already been made, and
some of the partnership institutions had been negotiating with Google since
2002.29 As with many of the previous mass digitization projects, Google found
inspiration for their digitization project in the long-lived utopian ideal of
the universal library, and in particular the mythic library of Alexandria.30
As with other Google endeavors, it seemed that Page was intent on realizing a
utopian ideal that scholars (and others) had long dreamed of: a library
containing everything ever written. It would be realized, however, not with
traditional human-centered means drawn from the world of libraries, but rather
with an AI approach. Google Books would exceed human constraints, taking the
seemingly impossible vision of digitizing all the books in the world as a
starting point for constructing an omniscient Artificial Intelligence that
would know the entire human symbol system and allow flexible and intuitive
recollection. These constraints were physical (how to digitize and organize
all this knowledge in physical form); legal (how to do it in a way that
suspends existing regulation); and political (how to transgress territorial
systems). The invocation of the notion of the universal library was not a
neutral action. Rather, the image of Google Books as a library worked as a
symbolic form in a cultural scheme that situated Google as a utopian, and even
ethical, idealist project. Google Books seemingly existed by virtue of
Goethe’s famous maxim that “To live in the ideal world is to treat the
impossible as if it were possible.”31 At the time, the industry magazine
_Bookseller_ wrote in response to Google’s digitization plans: “The prospect
is both thrilling and frightening for the book industry, raising a host of
technical and theoretical issues.” 32 And indeed, while some reacted with
enthusiasm and relief to the prospect of an organization being willing to
suffer the cost of mass digitization, others expressed economic and ethical
concerns. The Authors Guild, a New York–based association, promptly filed a
copyright infringement suit against Google. And librarians were forced to
revisit core ethical principles such as privacy and public access.

The controversies of Google Books initially played out only in US territory.
However, another set of concerns of a more territorial and political nature
soon came to light. The French President at the time, Jacques Chirac, called
France to cultural-political arms, urging his culture minister, Renaud
Donnedieu de Vabres, and Jean-Noël Jeanneney, then-head of France’s
Bibliothèque nationale, to do the same with French texts as Google planned to
do with their partner libraries, but by means of a French search engine.33
Jeanneney initially framed this French cultural-political endeavor as a
European “contre-attaque” against Google Books, which, according to Jeanneney,
could pose “une domination écrasante de l'Amérique dans la définition de
l'idée que les prochaines générations se feront du monde.” (“a crushing
American domination of the formation of future generations’ ideas about the
world”)34 Other French officials insisted that the French digitization project
should be seen not primarily as a cultural-political reaction _against_
Google, but rather as a cultural-political incentive within France and Europe
to make European information available online. “I really stress that it's not
anti-American,” an official at France’s Ministry of Culture and Communication,
speaking on the condition of anonymity, noted in an interview. “It is not a
reaction. The objective is to make more material relevant to European heritage
available. … Everybody is working on digitization projects.” Furthermore, the
official did not rule out potential cooperation between Google and the
European project. 35 There was no doubt, however, that the move to mass
digitization “was a political drive by the French,” as Stephen Bury, head of
European and American collections at the British Library, emphasized.36

Despite its mixed messages, the French reaction nevertheless underscored the
controversial nature of mass digitization as a symbolic, as well as technical,
aspiration: mass digitization was a process that not only neutrally scanned
and represented books but could also produce a new mode of world-making,
actively structuring archives as well as their users.37 Now questions began to
surface about where, or with whom, to place governance over this new archive:
who would be the custodian of the keys to this new library? And who would be
the librarians? A series of related questions could also be asked: who would
determine the archival limits, the relations between the secret and the non-
secret or the private and the public, and whether these might involve property
or access rights, publication or reproduction rights, classification, and
putting into order? France soon managed to rally other EU countries (Spain,
Poland, Hungary, Italy, and Germany) to back its recommendation to the
European Commission (EC) to construct a European alternative to Google’s
search engine and archive and to set this out in writing. Occasioned by the
French recommendation, the EC promptly adopted the idea of Europeana—the name
of the proposed alternative—as a “flagship project” for the budding EU
cultural policy.38 Soon after, in 2008, the EC launched Europeana, giving
access to some 4.5 million digital objects from more than 1,000 institutions.

Europeana’s Europeanizing discourse presents a territorializing approach to
mass digitization that stands in contrast to the more universalizing tone of
Mundaneum, Gutenberg, Google Books, and the Universal Digital Library. As
such, it ties in with our final examples, namely the sovereign mass
digitization projects that have in fact always been one of the primary drivers
in mass digitization efforts. To this day, the map of mass digitization is
populated with sovereign mass digitization efforts from Holland and Norway to
France and the United States. One of the most impressive projects is the
Norwegian mass digitization project at the National Library of Norway, which
since 2004 has worked systematically to develop a digital National Library
that encompasses text, audio, video, image, and websites. Impressively, the
National Library of Norway offers digital library services that provide online
access (to all with a Norwegian IP address) to full-text versions of all books
published in Norway up until the year 2001, access to digital newspaper
collections from the major national and regional newspapers in all libraries
in the country, and opportunities for everyone with Internet access to search
and listen to more than 40,000 radio programs recorded between 1933 and the
present day.39 Another ambitious national mass digitization project is the
Dutch National Library’s effort to digitize all printed publications since
1470 and to create a National Platform for Digital Publications, which is to
act both as a content delivery platform for its mass digitization output and
as a national aggregator for publications. To this end, the Dutch National
Library made deals with Google Books and ProQuest to digitize 42 million pages
just as it entered into partnerships with cross-domain aggregators such as
Europeana.40 Finally, it is imperative to mention the Digital Public Library
of America (DPLA), a national digital library conceived of in 2010 and
launched in 2013, which aggregates digital collections of metadata from around
the United States, pulling in content from large institutions like the
National Archives and Records Administration and HathiTrust, as well as from
smaller archives. The DPLA is in great part the fruit of the intellectual work
of Harvard University’s Berkman Center for Internet and Society and the work
of its Steering Committee, which consisted of influential names from the
digital, legal, and library worlds, such as Robert Darnton, Maura Marx, and
John Palfrey from Harvard University; Paul Courant of the University of
Michigan; Carla Hayden, then of Baltimore’s Enoch Pratt Free Library and
subsequently the Librarian of Congress; Brewster Kahle; Jerome McGann; Amy
Ryan of the Boston Public Library; and Doron Weber of the Sloan Foundation.
Key figures in the DPLA have often, to great rhetorical effect, positioned the DPLA
vis-à-vis Google Books, partly as a question of public versus private
infrastructures.41 Yet, as the then-Chairman of DPLA John Palfrey conceded,
the question of what constitutes “public” in a mass digitization context
remains a critical issue: “The Digital Public Library of America has its
critics. One counterargument is that investments in digital infrastructures at
scale will undermine support for the traditional and the local. As the
chairman of the DPLA, I hear this critique in the question-and-answer period
of nearly every presentation I give. … The concern is that support for the
DPLA will undercut already eroding support for small, local public
libraries.”42 While Palfrey offers good arguments for why the DPLA could
easily work in unison with, rather than jeopardize, smaller public libraries,
and while the DPLA is building infrastructures to support this claim,43 the
discussion nevertheless highlights the difficulties with determining when
something is “public,” and even national.

While the highly publicized and institutionalized projects I have just
recounted have taken center stage in the early and later years of mass
digitization, they neither constitute the full cast, nor the whole machinery,
of mass digitization assemblages. Indeed, as chapter 4 in this book charts, at
the margins of mass digitization another set of actors have been at work
building new digital cultural memory assemblages, including projects such as
Monoskop and Lib.ru. These actors, referred to in this book as shadow library
projects (see chapter 4), at once both challenge and confirm the broader
infrapolitical dimensions of mass digitization, including its logics of
digital capitalism, network power, and territorial reconfigurations of
cultural memory between universalizing and glocalizing discourses. Within this
new “ecosystem of access,” unauthorized archives such as Libgen, Gigapedia, and
Sci-Hub have successfully built “shadow libraries” with global reach,
containing massive aggregations of downloadable text material of both
scholarly and fictional character.44 As chapter 4 shows, these initiatives
further challenge our notions of public good, licit and illicit mass
digitization, and the territorial borders of mass digitization, just as they
add another layer of complexity to the question of the politics of mass
digitization.

Today, then, the landscape of mass digitization has evolved considerably, and
we can now begin to make out the political contours that have shaped, and
continue to shape, the emergent contemporary knowledge infrastructures of mass
digitization, ripe as they are with contestation, cooperation, and
competition. From this perspective, mass digitization appears as a preeminent
example of how knowledge politics are configured in today’s world of
“assemblages” as “multisited, transboundary networks” that connect
subnational, national, supranational, and global infrastructures and actors,
without, however, necessarily doing so through formal interstate systems.45 We
can also see that mass digitization projects did not arise as a result of a
sovereign decision, but rather emerged through a series of contingencies
shaped by late-capitalist and late-sovereign forces. Furthermore, mass
digitization presents us with an entirely new cultural memory paradigm—a
paradigm that requires a shift in thinking about cultural works, collections,
and contexts, from cultural records to be preserved and read by humans, to
ephemeral machine-readable entities. This change requires a shift in thinking
about the economy of cultural works, collections, and contexts, from scarce
institutional objects to ubiquitous flexible information. Finally, it requires
a shift in thinking about these same issues as belonging to national-global
domains to conceiving them in terms of a set of political processes that may
well be placed in national settings, but are oriented toward global agendas
and systems.

## Interrogating Mass Digitization

Mass digitization is often elastic in definition and elusive in practice.
Concrete attempts have been made to delimit what mass digitization is, but
these rarely go into specifics. The two characteristics most commonly
associated with mass digitization are the relative lack of selectivity of
materials, as compared to smaller-scale digitization projects, and the high
speed and high volume of the process in terms of both digital conversion and
metadata creation, which are made possible through a high level of
automation.46 Mass digitization is thus concerned not only with preservation,
but also with what kind of knowledge practices and values technology allows
for and encourages, for example, in relation to de- and recontextualization,
automation, and scale.47

Studies of mass digitization are commonly oriented toward technology or
information policy issues close to libraries, such as copyright, the quality
of digital imagery, long-term preservation responsibility, standards and
interoperability, and economic models for libraries, publishers, and
booksellers, rather than, as here, the exploration of theory.48 This is not to
say that existing work on mass digitization is not informed by theoretical
considerations, but rather that the majority of research emphasizes policy and
technical implementation at the expense of a more fundamental understanding of
the cultural implications of mass digitization. In part, the reason for this
is the relative novelty of mass digitization as an identifiable field of
practice and policy, and its significant ramifications in the fields of law
and information science.49 In addition to scholarly elucidations, mass
digitization has also given rise to more ideologically fuelled critical books
and articles on the topic.50

Despite its disciplinary branching, work on mass digitization has mainly taken
place in the fields of information science, law, and computer science, and has
primarily problematized the “hows” of mass digitization and not the “whys.”51
As with technical work on mass digitization, most nontechnical studies of mass
digitization are “problem-solving” rather than “critical,” and this applies in
particular to work originating from within the policy analysis community. This
body of work seeks to solve problems within the existing social order—for example,
copyright or metadata—rather than to interrogate the assumptions that underlie
mass digitization programs, which would include asking what kinds of knowledge
production mass digitization gives rise to. How does mass digitization change
the ideological infrastructures of cultural heritage institutions? And from
what political context does the urge to digitize on an industrial scale
emerge? While the technical and problem-solving corpus on mass digitization is
highly valuable in terms of outlining the most important stakeholders and
technical issues of the field, it does not provide insight into the deeper
structures, social mechanisms, and political implications of mass
digitization. Moreover, it often fails to account for digitization as a force
that is deeply entwined with other dynamics that shape its development and
uses. It is this lack that the present volume seeks to mitigate.

## Assembling Mass Digitization

Mass digitization is a composite and fluctuating infrastructure of
disciplines, interests, and forces rooted in public-private assemblages,
driven by ideas of value extraction and distribution, and supported by new
forms of social organization. Google Books, for instance, is both a commercial
project covered by nondisclosure agreements _and_ an academic scholarly
project open for all to see. Similarly, Europeana is both a public
digitization project directed at “citizens” _and_ a public-private partnership
enterprise ripe with profit motives. Nevertheless, while it is tempting to
speak about specific mass digitization projects such as Google Books and
Europeana in monolithic and contrastive terms, mass digitization projects are
anything but tightly organized, institutionally delineated, coherent wholes
that produce one dominant reading. We do not find one “essence” in mass
digitized archives. They are not “enlightenment projects,” “library services,”
“software applications,” “interfaces,” or “corporations.” Nor are they rooted
in one central location or single ideology. Rather, mass digitization is a
complex material and social infrastructure performed by a diverse
constellation of cultural memory professionals, computer scientists,
information specialists, policy personnel, politicians, scanners, and
scholars. Hence, this volume approaches mass digitization projects as
“assemblages,” that is, as contingent arrangements consisting of humans,
machines, objects, subjects, spaces and places, habits, norms, laws, politics,
and so on. These arrangements cross national-global and public-private lines,
producing what this volume calls “late-sovereign,” “posthuman,” and “late-
capitalist” assemblages.

To give an example, we can look at how the national and global aspects of
cultural memory institutions change with mass digitization. The national
museums and libraries we frequent today were largely erected during eras of
high nationalism, as supreme acts of cultural and national territoriality.
“The early establishment of a national collection,” as Belinda Tiffen notes,
“was an important step in the birth of the new nation,” since it signified
“the legitimacy of the nation as a political and cultural entity with its own
heritage and culture worthy of being recorded and preserved.”52 Today, as the
initial French incentive to build Europeana shows, we find similar
nationalization processes in mass digitization projects. However,
nationalizing a digital collection often remains more a performative gesture than a
practical feat, partly because the information environment in the digital
sphere differs significantly from that of the analog world in terms of
territory and materiality, and partly because the dichotomy between national
and global, an agreed-upon construction for centuries, is becoming more and
more difficult to uphold in theory and practice.53 Thus, both Google Books and
Europeana link to sovereign frameworks such as citizens and national
representation, while also undermining them with late-capitalist transnational
economic agreements.

A related example is the posthuman aspect of cultural memory politics.
Cultural memory artifacts have always been thought of as profoundly human
collections, in the sense that they were created by and for human minds and
human meaning-making. Previously, humans also organized collections. But with
the invention of computers, most cultural memory institutions also introduced
a machine element to the management of accelerating amounts of information,
such as computerized catalog systems and recollection systems. With the advent
of mass digitization, machines have gained a whole new role in the cultural
memory ecosystem, not only as managers, but also as interpreters. Thus,
collections are increasingly digitized to be read by machines instead of
humans, just as metadata is now becoming a question of machine analysis rather
than of human contextualization. Machines are taking on more and more tasks in
the realm of cultural memory that require a substantial amount of cognitive
insight (just as mass digitization has created the need for new robot-like,
and often poorly paid, human tasks, such as the monotonous work of book
scanning). Mass digitization has thereby given rise to an entirely new
cultural-legal category titled “non-consumptive research,” a term used to
describe the large-scale analysis of texts, and which has been formalized by
the Google Books Settlement, for instance, in the following way: “research in
which computational analysis is performed on one or more books, but not
research in which a researcher reads or displays.”54
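
To make the category concrete, the following minimal Python sketch illustrates the kind of analysis this settlement language points to: aggregate statistics are computed over a corpus of digitized books, but no passage is read or displayed. The directory name and workflow are hypothetical illustrations, not drawn from the settlement or from any actual project.

```python
# A minimal sketch of what "non-consumptive" computational analysis can look
# like: the corpus is analyzed in aggregate (here, word frequencies), but no
# passage is read or displayed. File names and paths are hypothetical.
from collections import Counter
from pathlib import Path
import re

def term_frequencies(corpus_dir: str) -> Counter:
    """Aggregate word counts across all plain-text files in a directory."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z]+", text))
    return counts

if __name__ == "__main__":
    # Prints only derived statistics, never the underlying text.
    freqs = term_frequencies("scanned_books")
    for word, n in freqs.most_common(10):
        print(word, n)
```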

Lastly, mass digitization connects the politics of cultural memory to
transnational late capitalism, and to one of its expressions in particular:
digital capitalism.55 Of course, cultural memory collections have a long
history with capitalism. The nineteenth century held very fuzzy boundaries
between the cultural functions of libraries and the commercial interests that
surrounded them, and, as historian of libraries Francis Miksa notes, Melvil
Dewey, inventor of the Dewey Decimal System, was a great admirer of the
corporate ideal, and was eager to apply it to the library system.56 Indeed,
library development in the United States was greatly advanced by the
philanthropy of capitalism, most notably by Andrew Carnegie.57 The question,
then, is not so much whether mass digitization has brought cultural memory
institutions, and their collections and users, into a capitalist system, but
_what kind_ of capitalist system mass digitization has introduced cultural
memory to: digital capitalism.

Today, elements of the politics of cultural memory are being reassembled into
novel knowledge configurations. As a consequence, their connections and
conjugations are being transformed, as are their institutional embeddings.
Indeed, mass digitization assemblages are a product of our time. They are new
forms of knowledge institutions arising from a sociopolitical environment
where vertical territorial hierarchies and horizontal networks entwine in a
new political mesh: where solid things melt into air, and clouds materialize
as material infrastructures, where boundaries between experts and laypeople
disintegrate, and where machine cognition operates on a par with human
cognition on an increasingly large scale. These assemblages enable new types
of political actors—networked assemblages—which hold particular forms of power
despite their informality vis-à-vis the formal political system; and in turn,
through their practices, these actors partly build and shape those
assemblages.

Since concepts always respond to “a specific social and historical situation
of which an intellectual occasion is part,”58 it is instructive to revisit the
1980s, when the theoretical notion of assemblage emerged and slowly gained
cross-disciplinary purchase.59 Around this time, the stable structures of
modernist institutions began to give ground to postmodern forces: sovereign
systems entered into supra-, trans-, and international structures,
“globalization” became a buzzword, and privatizing initiatives drove wedges
into the foundations of state structures. The centralized power exercised by
disciplinary institutions was increasingly distributed along more and more
lines, weakening the walls of circumscribed centralized authority.60 This
disciplinary decomposition took place on all levels and across all fields of
society, including institutional cultural memory containers such as libraries
and museums. The forces of privatization, globalization, and digitization put
pressures not only on the authority of these institutions but also on a host
of related authoritative cultural memory elements, such as “librarians,”
“cultural works,” and “taxonomies,” and cultural memory practices such as
“curating,” “reading,” and “ownership.” Librarians were “disintermediated” by
technology, cultural works fragmented into flexible data, and curatorial
principles were revised and restructured just as reading was now beginning to
take place in front of screens, meaning-making to be performed by machines,
and ownership of works to be substituted by contractual renewals.

Thinking about mass digitization as an “assemblage” allows us to abandon the
image of a circumscribed entity in favor of approaching it as an aggregate of
many highly varied components and their contingent connections: scanners,
servers, reading devices, cables, algorithms; national, EU, and US
policymakers; corporate CEOs and employees; cultural heritage professionals
and laypeople; software developers, engineers, lobby organizations, and
unsalaried labor; legal settlements, academic conferences, position papers,
and so on. It gives us pause—every time we say “Google” or “Europeana,” we
might reflect on what we actually mean. Does the researcher employed by a
university library and working with Google Books also belong to Google Books?
Do the underpaid scanners? Do the users of Google? Or, when we refer to Google
Books, do we rather only mean to include the founders and CEOs of Google? Or
has Google in fact become a metaphor that expresses certain characteristics of
our time? The present volume suggests that all these components enter into the
new phenomenon of mass digitization and produce a new field of potentiality,
while at the same time they retain their original qualities and value systems,
at least to some extent. No assemblage is whole and imperturbable, nor
entirely reducible to its parts, but is simultaneously an accumulation of
smaller assemblages and a member of larger ones.61 Thus Google Books, for
example, is both an aggregation of smaller assemblages such as university
libraries, scanners (both humans and machines), and books, _and_ a member of
larger assemblages such as Google, Silicon Valley, neoliberal lobbies, and the
Internet, to name but a few.

While representations of assemblages such as the analyses performed in this
volume are always doomed to misrepresent empirical reality on some level, this
approach nevertheless provides a tool for grasping at least some of mass
digitization’s internal heterogeneity, and the mechanisms and processes that
enable each project’s continued assembled existence. The concept of the
assemblage allows us to grasp mass digitization as composed of ephemeral
projects that are uncertain by nature, and sometimes even made up of
contradictory components.62 It also allows us to recognize that they are more
than mere networks: while ephemeral and networked, something enables them to
cohere. Bruno Latour writes, “Groups are not silent things, but rather the
provisional product of a constant uproar made by the millions of contradictory
voices about what is a group and who pertains to what.”63 It is the “taming
and constraining of this multivocality,” in particular by communities of
knowledge and everyday practices, that enables something like mass
digitization to cohere as an assemblage.64 This book is, among other things,
about those communities and practices, and the politics they produce and are
produced by. In particular, it addresses the politics of mass digitization as
an infrapolitical activity that retreats into, and emanates from, digital
infrastructures and the network effects they produce.

## Politics in Mass Digitization: Infrastructure and Infrapolitics

If the concept of “assemblage” allows us to see the relational set-up of mass
digitization, it also allows us to inquire into its political infrastructures.
In political terms, assemblage thinking is partly driven by dissatisfaction
with state-centric dominant ontologies, including reified units such as state,
society, or capitalism, and the unilinear focus on state-centric politics over
other forms of politics.65 The assemblage perspective is therefore especially
useful for understanding the politics of late-sovereign and late-capitalist
data projects such as mass digitization. As we will see in part 2, the
epistemic frame of sovereignty continues to offer an organizing frame for the
constitution and regulation of mass digitization and the virtues associated
with it (such as national representation and citizen engagement). However, at
the same time, mass digitization projects are in direct correspondence with
neoliberal values such as privatization, consumerism, globalization, and
acceleration, and their technological features allow for a complete
restructuring of the disciplinary spaces of libraries to form vaster and even
global scales of integration and economic organization on a multinational
stage.

Mass digitization is a concrete example of what cultural memory projects look
like in a “late-sovereign” age, where globalization tests the political and
symbolic authority of sovereign cultural memory politics to its limits, while
sovereignty as an epistemic organizing principle for the politics of cultural
memory nonetheless persists.66 The politics of cultural memory, in particular
those practiced by cultural heritage institutions, often still cling to fixed
sovereign taxonomies and epistemic frameworks. This focus is partly determined
by their institutional anchoring in the framework of national cultural
policies. In mass digitization, however, the formal political apparatus of
cultural heritage institutions is adjoined by a politics that plays out in the
margins: in lobbies, software industries, universities, social media, etc.
Those evaluating mass digitization assemblages in macropolitical terms, that
is, those who are concerned with political categories, will glean little of
the real politics of mass digitization, since such politics at the margins
would escape this analytic matrix.67 Assemblage thinking, by contrast, allows
us to acknowledge the political mechanisms of mass digitization beyond
disciplinary regulatory models, in societies “where forces … not
categories, clash.”68

As Ian Hacking and many others have noted, the capacious usage of the notion
of “politics” threatens to strip the word of meaning.69 But talk of a politics
of mass digitization is no conceptual gimmick, since what is taking place in
the construction and practice of mass digitization assemblages plainly is
political. The question, then, is how best to describe the politics at work in
mass digitization assemblages. The answer advanced by the present volume is to
think of the politics of mass digitization as “infrapolitics.”

The notion of infrapolitics has until now primarily and profoundly been
advanced as a concept of hidden dissent or contestation (Scott, 1990).70 This
volume suggests shifting the lens to focus on a different kind of
infrapolitics, however, one that not only takes the shape of resistance but
also of maintenance and conformity, since the story of mass digitization is
both the story of contestation _and_ the politics of mundane and standard-
seeking practices.71 The infrapolitics of mass digitization is, then, a kind
of politics “premised not on a subject, but on the infra,” that is, the
“underlying rules of the world,” organized around glocal infrastructures.72
The infrapolitics of mass digitization is the building and living of
infrastructures, both as spaces of contestation and processes of
naturalization.

Geoffrey Bowker and Susan Leigh Star have argued that the establishment of
standards, categories, and infrastructures “should be recognized as the
significant site of political and ethical work that they are.”73 This applies
not least in the construction and development of knowledge infrastructures
such as mass digitization assemblages, structures that are upheld by
increasingly complex sets of protocols and standards. Attaching “politics” to
“infrastructure” endows the term—and hence mass digitization under this
rubric—with a distinct organizational form that connects various stages and
levels of politics, as well as a distinct temporality that relates mass
digitization to the forces and ideas of industrialization and globalization.

The notion of infrastructure has a surprisingly brief etymology. It first
entered the French language in 1875 in relation to the excavation of
railways.74 Over the following decades, it primarily designated fixed
installations designed to facilitate and foster mobility. It did not enter
English vocabulary until 1927, and as late as 1951, the word was still
described by English sources as “new” (OED).75 When NATO adopted the term in
the 1950s, it gained a military tinge. Since then, “infrastructure” has
proliferated into ever more contexts and disciplines, becoming a “plastic
word”76 often used to signify any vital and widely shared human-constructed
resource.77

What makes infrastructures central for understanding the politics of mass
digitization? Primarily, they are crucial to understanding how industrialism
has affected the ways in which we organize and engage with knowledge, but the
politics of infrastructures are also becoming increasingly significant in the
late-sovereign, late-capitalist landscape.

The infrastructures of mass digitization mediate, combine, connect, and
converge upon different institutions, social networks, and devices, augmenting
the actors that take part in them with new agential possibilities by expanding
the radius of their action, strengthening and prolonging the reach of their
performance, and setting them free for other activities through their
accelerating effects, time that is often reinvested in other infrastructures
such as social media activities. The infrastructures of mass
digitization also increase the demand for globalization and mobility, since
they expand the radius of using/reading/working.

The infrastructures of mass digitization are thus media of polities and
politics, at times visible and at others barely legible or felt, and home both
to dissent as well as to standardizing measures. These include legal
infrastructures such as copyright, privacy, and trade law; material
infrastructures such as books, wires, scanners, screens, server parks, and
shelving systems; disciplinary infrastructures such as metadata, knowledge
organization, and standards; cultural infrastructures such as algorithms,
searching, reading, and downloading; societal infrastructures such as the
realms of the public and private, national and global. These infrastructures
are, depending on the perspective, both the prerequisites for and the results of interactions
between the spatial, temporal, and social classes that take part in the
construction of mass digitization. The infrapolitics of mass digitization is
thus geared toward both interoperability and standardization, as well as
toward variation.78

Often when thinking of infrastructures, we conceive of them in terms of
durability and stability. Yet, while some infrastructures, such as railways
and Internet cables, are fairly solid and rigid constructions, others—such as
semantic links, time-limited contracts, and research projects—are more
contingent entities which operate not as “fully coherent, deliberately
engineered, end-to-end processes,” but rather as amorphous, contingent
assemblages, as “ecologies or complex adaptive systems” consisting of
“numerous systems, each with unique origins and goals, which are made to
interoperate by means of standards, socket layers, social practices, norms,
and individual behaviors that smooth out the connections among them.”79 This
contingency has direct implications for infrapolitics, which become equally
flexible and adaptive. These characteristics endow mass digitization
infrastructures with vulnerabilities but also with tremendous cultural power,
allowing them to distribute agency, and to create and facilitate new forms of
sociality and culture.

Building mass digitization infrastructures is a costly endeavor, and hence
mass digitization infrastructures are often backed by public-private
partnerships. Indeed infrastructures—and mass digitization infrastructures are
no exceptions—are often so costly that a certain mixture of political or
individual megalomania, state reach, and private capital is present in their
construction.80 This mixed foundation means that many of the political
decisions regarding mass digitization literally take place _beneath_ the radar
of “the representative institutions of the political system of nation-states,”
while also more or less aggressively filling out “gaps” in nation-state
systems, and even creating transnational zones with their own policies.81
Hence the notion of “infra”: the infrapolitics of mass digitization hover at a
frequency that lies _below_ and beyond formal sovereign state apparatus,
organized, as they are, around glocal—and often private or privatized—material
and social infrastructures.

While distinct from the formalized sovereign political system, infrapolitical
assemblages nevertheless often perform as late-sovereign actors by engaging in
various forms of “sovereignty games.”82 Take Google, for instance, a private
corporation that often defines itself as at odds with state practice, yet also
often more or less informally meets with state leaders, engages in diplomatic
discussions, and enters into agreements with state agencies and local
political councils. The infrapolitical forces of Google in these sovereignty
games can on the one hand exert political pressure on states—for instance in
the name of civic freedom—but in Google’s embrace of politics, its
infrapolitical forces can on the other hand also squeeze the life out of
existing parliamentary ways, promoting instead various forms of apolitical or
libertarian modes of life. The infrapolitical apparatus thus stands apart from
more formalized politics, not only in terms of political arena, but also in terms of the
constraints placed upon it, for instance in the form of public
accountability.83 What is described here can in general terms be called the
infrapolitics of neoliberalism, whose scenery consists of lobby rooms, policy-
making headquarters, financial zones, public-private spheres, and is populated
by lobbyists, bureaucrats, lawyers, and CEOs.

But the infrapolitical dynamics of mass digitization also operate in more
mundane and less obvious settings, such as software design offices and
standardization agencies, and are enacted by engineers, statisticians,
designers, and even users. Infrastructures are—increasingly—essential parts of
our everyday lives, not only in mass digitization contexts, but in all walks
of life, from file formats and software programs to converging transportation
systems, payment systems, and knowledge infrastructures. Yet, what is most
significant about the majority of infrapolitical institutions is that they are
so mundane; if we notice them at all, they appear to us as boring “lists of
numbers and technical specifications.”84 And their maintenance and
construction often occurs “behind the scenes.”85 There is a politics to these
naturalizing processes, since they influence and frame our moral, scientific,
and aesthetic choices. This is to say that these kinds of infrapolitical
activities often retire or withdraw into a kind of self-evidence in which the
values, choices, and influences of infrastructures are taken for granted and
accorded a universally accepted obviousness. It is therefore
all the more “politically and ethically crucial”86 to recognize the
infrapolitics of mass digitization, not only as contestation and privatized
power games, but also as a mode of existence that values professionalized
standardization measures and mundane routines, not least because these
infrapolitical modes of existence often outlast their material circumstances
(“software outlasts hardware” as John Durham Peters notes).87 In sum,
infrastructures and the infrapolitics they produce yield subtle but
significant world-making powers.

## Power in Mass Digitization

If mass digitization is a product of a particular social configuration and
political infrastructure, it is also, ultimately, a site and an instrument of
power. In a sense, mass digitization is an event that stages a fundamental
confrontation between state and corporate power, while pointing to the
reconfigurations of both as they become increasingly embedded in digital
infrastructures. For instance, such confrontation takes place at the
negotiating table, where cultural heritage directors face the seductive and
awe-inspiring riches of Silicon Valley, as well as its overwhelmingly
intricate contractual layouts and its intimidating entourage of lawyers.
Confrontation also takes place at the level of infrastructural ideology, in
the meeting between twentieth-century standardization ideals and the playful
and flexible network dynamics of the twenty-first century, as seen for
instance in the conjunction of institutionally fixed taxonomies and
algorithmic retrieval systems that include feedback mechanisms. And it takes
place at the level of users, as they experience a gain in some powers and the
loss of others in their identity transition from national patrons of cultural
memory institutions to globalized users of mass digitization assemblages.

These transformations are partly the results of society’s increasing reliance
on network power and its effects. Political theorists Michael Hardt and
Antonio Negri suggested almost two decades ago that among other things, global
digital systems enabled a shift in power infrastructures from robust national
economies and core industrial sectors to interactive networks and flexible
accumulation, creating a “form of network power, which requires the wide
collaboration of dominant nation-states, major corporations, supra-national
economic and political institutions, various NGOs, media conglomerates and a
series of other powers.”88 From this landscape, according to their argument,
emerged a new system of power in which morphing networks took precedence over
reliable blocs. Hardt and Negri’s diagnosis was one of several similar
arguments across the political spectrum that were formed within such a short
interval that “the network” arguably became the “defining concept of our
epoch.”89 Within this new epoch, the old centralized blocs of power crumbled
to make room for new forms of decentralized “bastard” power phenomena, such as
the extensive corporate/state mass surveillance systems revealed by Edward
Snowden and others, and new forms of human rights such as “the right to be
forgotten,” a right for which a more appropriate name would be “the right to
not be found by Google.”90 Network power and network effects are therefore
central to understanding how mass digitization assemblages operate, and why
some mass digitization assemblages are more powerful than others.

The power dynamics we find in Google Books, for instance, are directly related
to the ways in which digital technologies harness network effects: the power
of Google Books grows exponentially as its network expands.91 Indeed, as Siva
Vaidhyanathan noted in his critical work on Google’s role in society, what he
referred to as the “Googlization of books” was ultimately deeply intertwined
with the “Googlization of everything.”92 The networks of Google were thus not
external to Google’s successes and challenges, but deeply endemic to them,
from portals and ranking systems to anchoring (elite) institutions, and
so on. The better Google Books becomes at harnessing network effects, the more
fundamental its influence is in the digital sphere. And Google Books is very
good at harnessing digital network power. Indeed, Google Books reached its
“tipping point” almost before it launched: it had by then already attracted so
many stakeholders that its mere existence decreased the power of any competing
entities—and the fact that its heavy user traffic is embedded in Google only
strengthened its network effects. Google Books’s tipping point tells us little
about its quality in an abstract sense: “tipping points” are more often
attained by proprietary measures, lobbying, expansion, and most typically by a
mixture of all of the above, than by sheer quality.93 This explains not only
the success of Google Books, but also its traction with even its critics:
although Google Books was initially criticized heavily for its poor imagery
and faulty metadata,94 for its possible harmful impact on the public sphere,95 and
later over privacy concerns,96 it had already created a power hub to which
masses of people, though they could have navigated around it, were
nevertheless increasingly drawn.
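
The network-effects intuition invoked here, that a platform’s power grows disproportionately as its user base expands, is often formalized in the network-economics literature cited in the notes (e.g., Easley and Kleinberg) along roughly Metcalfe-style lines; the expression below is an illustrative convention, not a formula given in this book.

```latex
% Illustrative convention only: with n users, the number of possible
% pairwise connections (and, on one common assumption, the network's
% value V) grows roughly quadratically in n.
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
```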

Network power is endemic not only to concrete digital networks, but also to
globalization at large as a process that simultaneously gives rise to feelings
of freedom of choice and loss of choice.97 Mass digitization assemblages, and
their globalization of knowledge infrastructures, thus crystalize the more
general tendencies of globalization as a process in which people participate
by choice, but not necessarily voluntarily; one in which we are increasingly
pushed into a game of social coordination, where common standards allow more
effective coordination yet also entrap us in their pull for convergence.
Standardization is therefore a key technique of network power: on the one
hand, standardization is linked with globalization (and various neoliberal
regimes) and the attendant widespread contraction of the state, while on the
other hand, standardization implies a reconfiguration of everyday life.98
Standards allow for both minute data analytics and overarching political
systems that “govern at a distance.”99 Standardization understood in this way
is thus a mode of capturing, conceptualizing, and configuring reality, rather
than simply an economic instrument or lubricant. In a sense, standardization
could even be said to be habit forming: through standardization, “inventions
become commonplace, novelties become mundane, and the local becomes
universal.”100

To be sure, standardization has long been a crucial tool of world-making
power, spanning both the early and late-capitalist eras.101 “Standard time,”
as John Durham Peters notes, “is a sine qua non for international
capitalism.”102 Without the standardized infrastructure of time there would be
no global transportation networks, no global trade channels, and no global
communication networks. Indeed, globalization is premised on standardization
processes.

What kind of standardization processes do we find, then, in mass digitization
assemblages? Internet use alone involves direct engagement with hundreds of
global standards, from Bluetooth and Wi-Fi to protocol standards such as HTTP
and file standards such as Word and MP4.103 Moreover, mass
digitization assemblages confront users with a series of additional standards,
from cultural standards of tagging to technical standards of interoperability,
such as the Europeana Data Model (EDM) and Google’s schema.org, or legal
standards such as copyright and privacy regulations. Yet, while these
standards share affinities with the standardization processes of
industrialization, in many respects they also deviate from them. Instead, we
experience in mass digitization “a new form of standardization,”104 in which
differentiation and flexibility gain increasing influence without, however,
dispensing with standardization processes.

Today’s standardization is increasingly coupled with demands for flexibility
and interoperability. Flexibility, as Joyce Kolko has shown, is a term that
gained traction in the 1970s, when it was employed to describe putative
solutions to the problems of Fordism.105 It was seen as an antidote to Fordist
“rigidity”—a serious offense in the neoliberal regime. Thus, while the digital
networks underlying mass digitization are geared toward standardization and
expansion, since “information technology rewards scale, but only to the extent
that practices are standardized,”106 they are also becoming increasingly
flexible, since too-rigid standards hinder network effects, that is, the
growth of additional networks. This is one reason why mass digitization
assemblages increasingly and intentionally break down the so-called “silo”
thinking of cultural memory institutions, and implement standard flexibility
and interoperability to increase their range.107 One area of such
reconfiguration in mass digitization is the taxonomic field, where stable
institutional taxonomic structures are converted to new flexible modes of
knowledge organization like linked data.108 Linked data can connect cultural
memory artifacts as well as metadata in new ways, and the move from a cultural
memory web of interlinked documents to a cultural memory web of interlinked
data can potentially “amplify the impact of the work of libraries and
archives.”109 However, in order to work effectively, linked data demands
standards and shared protocols.
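
To indicate what “interlinked data” can mean in practice, the following minimal Python sketch builds a handful of machine-readable statements about a digitized book. The identifiers, vocabularies, and the use of the rdflib library are assumptions made for illustration; they are not drawn from Europeana, Google, or any project discussed here.

```python
# Minimal linked-data sketch: a digitized book described as RDF triples.
# The identifiers and vocabulary choices are illustrative assumptions;
# rdflib is used here only as one convenient way to build and serialize them.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

SCHEMA = Namespace("https://schema.org/")

g = Graph()
book = URIRef("https://example.org/collection/book/123")  # hypothetical ID

g.add((book, RDF.type, SCHEMA.Book))
g.add((book, DCTERMS.title, Literal("An Example Digitized Work")))
# Real projects would point to shared authorities (e.g., VIAF for persons,
# Library of Congress subject headings); placeholders are used here.
g.add((book, DCTERMS.creator, URIRef("https://example.org/authority/person/42")))
g.add((book, DCTERMS.subject, URIRef("https://example.org/authority/subject/libraries")))

print(g.serialize(format="turtle"))
```

The point of the exercise is that once a book, its creator, and its subject are expressed as shared, resolvable identifiers, other institutions’ records can link to the same nodes, which is precisely why such flexibility still demands standards and shared protocols.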

Flexibility allows the user a freer range of actions, and thus potentially
also the possibility of innovation. These affordances often translate into
user freedom or empowerment. Yet flexibility does not necessarily equal
fundamental user autonomy or control. On the contrary, flexibility is often
achieved through decomposition, modularization, and black-boxing, allowing
some components to remain stable while others are changed without implications
for the rest of the system.110 These components are made “fluid” in the sense
that they are divested of clear boundaries and allowed multiple identities,
and in that they enable continuity and dissolution.

While these new flexible standard-setting mechanisms are often localized in
national and subnational settings, they are also globalized systems “oriented
towards global agendas and systems.”111 Indeed, they are “glocal”
configurations with digital networks at their cores. The increasing
significance of these glocal configurations has not only cultural but also
democratic consequences, since they often leave users powerless when it comes
to influencing their cores.112 This more fundamental problematic also pertains
to mass digitization, a phenomenon that operates in an environment that
constructs and encourages less Habermasian public spheres than “relations of
sociability,” from which “aggregate outcomes emerge not from an act of
collective decision-making, but through the accumulation of decentralized,
individual decisions that, taken together, nonetheless conduce to a
circumstance that affects the entire group.”113 For example, despite the
flexibility Google Books allows us in terms of search and correlation, we have
very little sway over its construction, even though we arguably influence its
dynamics. The limitations of our influence on the cores of mass digitization
assemblages have implications not only for how we conceive of institutional
power, but also for our own power within these matrixes.

## Notes

1. Borghi 2012, 420. 2. Latour 2008. 3. For more on this, see Hicks 2018;
Abbate 2012; Ensmenger 2012. In the case of libraries, (white) women still
make up the majority of the workforce, but there is a disproportionate number
of men in senior positions, in comparison with their overall representation;
see, for example, Schonfeld and Sweeney 2017. 4. Meckler 1982. 5. Otlet and
Rayward 1990, chaps. 6 and 15. 6. For a historical and contemporary overview
over some milestones in the use of microfilms in a library context, see Canepi
et al. 2013, specifically “Historic Overview.” See also chap. 10 in Baker
2002. 7. Pfanner 2012. 8.
. 9. Medak et al.
2016. 10. Michael S. Hart, “The History and Philosophy of Project Gutenberg,”
Project Gutenberg, August 1992,
.
11. Ibid. 12. . 13. Ibid. 14. Bruno Delorme,
“Digitization at the Bibliotheque Nationale De France, Including an Interview
with Bruno Delorme,” _Serials_ 24 (3) (2011): 261–265. 15. Alain Giffard,
“Dilemmas of Digitization in Oxford,” _AlainGiffard’s Weblog_ , posted May 29,
2008, in-oxford>. 16. Ibid. 17. Author’s interview with Alain Giffard, Paris, 2010.
18. Ibid. 19. Later, in 1997, François Mitterrand demanded that the digitized
books should be brought online, accessible as text from everywhere. This,
then, was what became known as Gallica, the digital library of BnF, which was
launched in 1997. Gallica contains documents primarily out of copyright from
the Middle Ages to the 1930s, with priority given to French-speaking culture,
hosting about 4 million documents. 20. Imerito 2009. 21. Ambati et al. 2006;
Chen 2005. 22. Ryan Singel, “Stop the Google Library, Net’s Librarian Says,”
_Wired_ , May 19, 2009, library-nets-librarian-says>. 23. Alfred P. Sloan Foundation, Annual Report,
2006,
.
24. Leetaru 2008. 25. Amazon was also a major player in the early years of
mass digitization. In 2003 they gave access to a digital archive of more than
120,000 books with the professed goal of adding Amazon’s multimillion-title
catalog in the following years. As with all other mass digitization
initiatives, Jeff Bezos faced a series of copyright and technological
challenges. He met these with legal rhetorical ingenuity and the technical
skills of Udi Manber, who later became the lead engineer with Google, see, for
example, Wolf 2003. 26. Leetaru 2008. 27. John Markoff, “The Coming Search
Wars,” _New York Times_ , February 1, 2004,
. 28.
Google press release, “Google Checks out Library Books,” December 14, 2004,
.
29. Vise and Malseed 2005, chap. 21. 30. Auletta 2009, 96. 31. Johann Wolfgang
Goethe, _Sprüche in Prosa_, “Werke” (Weimar edition), vol. 42, pt. 2, 141;
cited in Cassirer 1944. 32. Philip Jones, “Writ to the Future,” _The
Bookseller_ , October 22, 2015, future-315153>. 33. “Jacques Chirac donne l’impulsion à la création d’une
bibliothèque numérique,” _Le Monde_ , March 16, 2005,
donne-l-impulsion-a-la-creation-d-une-bibliotheque-
numerique_401857_3246.html>. 34. “An overwhelming American dominance in
defining future generations’ conception about the world” (author’s own
translation). Ibid. 35. Labi 2005; “The worst scenario we could achieve would
be that we had two big digital libraries that don’t communicate. The idea is
not to do the same thing, so maybe we could cooperate, I don’t know. Frankly,
I’m not sure they would be interested in digitizing our patrimony. The idea is
to bring something that is complementary, to bring diversity. But this doesn’t
mean that Google is an enemy of diversity.” 36. Chrisafis 2008. 37. Béquet
2009. For more on the political potential of archives, see Foucault 2002;
Derrida 1996; and Tygstrup 2014. 38. “Comme vous soulignez, nos bibliothèques
et nos archives contiennent la mémoire de nos culture européenne et de
société. La numérisation de leur collection—manuscrits, livres, images et
sons—constitue un défi culturel et économique auquel il serait bon que
l’Europe réponde de manière concertée.” (As you point out, our libraries and
archives contain the memory of our European culture and society. Digitization
of their collections—manuscripts, books, images, and sounds—is a cultural and
economic challenge to which it would be good for Europe to respond in a concerted
manner.) Manuel Barroso, open letter to Jacques Chirac, July 7, 2007,
[http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
39. Jøsevold 2016. 40. Janssen 2011. 41. Robert Darnton, “Google’s Loss: The
Public’s Gain,” _New York Review of Books_ , April 28, 2011,
. 42.
Palfrey 2015, 104. 43. See, for example, DPLA’s Public Library
Partnerships Project, partnerships>. 44. Karaganis 2018. 45. Sassen 2008, 3. 46. Coyle 2006; Borghi
and Karapapa, _Copyright and Mass Digitization_ ; Patra, Kumar, and Pani,
_Progressive Trends in Electronic Resource Management in Libraries_. 47.
Borghi 2012. 48. Beagle et al. 2003; Lavoie and Dempsey 2004; Courant 2006;
Earnshaw and Vince 2007; Rieger 2008; Leetaru 2008; Deegan and Sutherland
2009; Conway 2010; Samuelson 2014. 49. The earliest textual reference to the
mass digitization of books dates to the early 1990s. Richard de Gennaro,
Librarian of Harvard College, in a panel on funding strategies, argued that an
existing preservation program called “brittle books” should take precedence
over other preservation strategies such as mass deacidification; see Sparks,
_A Roundtable on Mass Deacidification_ , 46. Later the word began to attain
the sense we recognize today, as referring to digitization on a large scale.
In 2010 a new word popped up, “ultramass digitization,” a concept used to
describe the efforts of Google vis-à-vis more modest large-scale digitization
projects; see Greene 2010 _._ 50. Kevin Kelly, “Scan This Book!,” _New York
Times_ , May 14, 2006, ; Hall 2008; Darnton 2009;
Palfrey 2015. 51. As Alain Giffard notes, “I am not very confident with the
programs of digitization full of technical and economical considerations, but
curiously silent on the intellectual aspects” (Alain Giffard, “Dilemmas of
Digitization in Oxford,” _AlainGiffard’s Weblog_ , posted May 29, 2008,
oxford>). 52. Tiffen 2007. 344. See also Peatling 2004. 53. Sassen 2008. 54.
See _The Authors Guild et al. vs. Google, Inc._ , Amended Settlement Agreement
05 CV 8136, United States District Court, Southern District of New York,
(2009) sec 7(2)(d) (research corpus), sec. 1.91, 14. 55. Informational
capitalism is a variant of late capitalism, which is based on cognitive,
communicative, and cooperative labor. See Christian Fuchs, _Digital Labour and
Karl Marx_ (New York: Routledge, 2014), 135–152. 56. Miksa 1983, 93. 57.
Midbon 1980. 58. Said 1983, 237. 59. For example, the diverse body of
scholarship that employed the notion of “assemblage” as a heuristic and/or
ontological device for grasping and formulating these changing relations of
power and control; in sociology: Haggerty and Ericson 2000; Rabinow 2003; Ong
and Collier 2005; Callon et al. 2016; in geography: Anderson and McFarlane
2011, 124–127; in philosophy: Deleuze and Guattari 1987; DeLanda 2006; in
cultural studies: Puar 2007; in political science: Sassen 2008. The
theoretical scope of these works ranged from close readings of and ontological
alignments with Deleuze and Guattari’s work (e.g., DeLanda), to more
straightforward descriptive employments of the term as outlined in the OED
(e.g., Sassen). What the various approaches held in common was the effort to
steer readers away from thinking in terms of essences and stability toward
thinking about more complex and unstable structures. Indeed, the “assemblage”
seems to have become a prescriptive as much as a diagnostic tool (Galloway
2013b; Weizman 2006). 60. Deleuze 1997; Foucault 2009; Hardt and Negri 2007.
61. DeLanda 2006; Paul Rabinow, “Collaborations, Concepts, Assemblages,” in
Rabinow and Foucault 2011, 113–126, at 123. 62. Latour 2005, 28. 63. Ibid.,
35. 64. Tim Stevens, _Cyber Security and the Politics of Time_ (Cambridge:
Cambridge University Press, 2015), 33. 65. Abrahamsen and Williams 2011. 66.
Walker 2003. 67. Deleuze and Guattari 1987, 116. 68. Parisi 2004, 37. 69.
Hacking 1995, 210. 70. Scott 2009. In James C. Scott’s formulation,
infrapolitics is a form of micropolitics, that is, the term refers to
political acts that evade the formal political apparatus. This understanding
was later taken up by Robin D. G. Kelley and Alberto Moreiras, and more
recently by Stevphen Shukaitis and Angela Mitropoulos. See Kelley 1994;
Shukaitis 2009; Mitropoulos 2012; Alberto Moreiras, _Infrapolitics: the
Project and Its Politics. Allegory and Denarrativization. A Note on
Posthegemony_. eScholarship, University of California, 2015. 71. James C.
Scott also concedes as much when he briefly links his notion of infrapolitics
to infrastructure, as the “cultural and structural underpinning of the more
visible political action on which our attention has generally been focused”;
Scott 2009, 184. 72. Mitropoulos 2012, 115. 73. Bowker and Star 1999, 319. 74.
Centre National de Ressources Textuelles et Lexicales,
. 75. For an English
etymological examination, see also Batt 1984, 1–6. 76. This is on account of
their malleability and the uncanny way they are used to fit every
circumstance. For more on the potentials and problems of plastic words, see
Pörksen 1995. 77. Edwards 2003, 186–187. 78. Mitropoulos 2012, 117. 79.
Edwards et al. 2012. 80. Peters 2015, at 31. 81. Beck 1996, 1–32, at 18;
Easterling 2014. 82. Adler-Nissen and Gammeltoft-Hansen 2008. 83. Holzer and
Mads 2003. 84. Star 1999, 377. 85. Ibid. 86. Bowker and Star 1999, 326. 87.
Peters 2015, 35. 88. Hardt and Negri 2009, 205. 89. Chun 2017. 90. As argued
by John Naughton at the _Negotiating Cultural Rights_ conference, National
Museum, Copenhagen, Denmark, November 13–14, 2015,
.
91. The “tipping point” is a metaphor for sudden change first introduced by
Morton Grodzins in 1960, later used by sociologists such as Thomas Schelling
(for explaining demographic changes in mixed-race neighborhoods), before
becoming more generally familiar in urbanist studies (used by Saskia Sassen,
for instance, in her analysis of global cities), and finally popularized by
mass psychologists and trend analysts such as Malcolm Gladwell, in his
bestseller of that name; see Gladwell 2000. 92. “Those of us who take
liberalism and Enlightenment values seriously often quote Sir Francis Bacon’s
aphorism that ‘knowledge is power.’ But, as the historian Stephen Gaukroger
argues, this is not a claim about knowledge: it is a claim about power.
‘Knowledge plays a hitherto unrecognized role in power,’ Gaukroger writes.
‘The model is not Plato but Machiavelli.’ Knowledge, in other words, is an
instrument of the powerful. Access to knowledge gives access to that
instrument of power, but merely having knowledge or using it does not
automatically confer power. The powerful always have the ways and means to use
knowledge toward their own ends. … How can we connect the most people with the
best knowledge? Google, of course, offers answers to those questions. It’s up
to us to decide whether Google’s answers are good enough.” See Vaidhyanathan
2011, 149–150. 93. Easley and Kleinberg 2010, 528. 94. Duguid 2007; Geoffrey
Nunberg, “Google’s Book Search: A Disaster for Scholars,” _Chronicle of Higher
Education,_ August 31, 2009; _The Idea of Order: Transforming Research
Collections for 21st Century Scholarship_ (Washington, DC: Council on Library
and Information Resources, 2010), 106–115. 95. Robert Darnton, “Google’s Loss:
The Public’s Gain,” _New York Review of Books_ , April 28, 2011,
. 96.
Jones and Janes 2010. 97. David S. Grewal, _Network Power: The Social Dynamics
of Globalization_ (New Haven: Yale University Press, 2008). 98. Higgins and
Larner, _Calculating the Social: Standards and the Reconfiguration of
Governing_ (Basingstoke: Palgrave Macmillan, 2010). 99. Ponte, Gibbon, and
Vestergaard 2011; Gibbon and Henriksen 2012. 100. Russell 2014. See also Wendy
Chun on the correlation between habit and standardization: Chun 2017. 101.
Busch 2011. 102. Peters 2015, 224. 103. DeNardis 2011. 104. Hall and Jameson
1990. 105. Kolko 1988. 106. Agre 2000. 107. For more on the importance of
standard flexibility in digital networks, see Paulheim 2015. 108. Linked data
captures the intellectual information users add to information resources when
they describe, annotate, organize, select, and use these resources, as well as
social information about their patterns of usage. On one hand, linked data
allows users and institutions to create taxonomic categories for works on a
par with cultural memory experts—and often in conflict with such experts—for
instance by linking classical nudes with porn; and on the other hand, it
allows users and institutions to harness social information about patterns of
use. Linked data has ideological and economic underpinnings as much as
technical ones. 109.  _The National Digital Platform: for Libraries, Archives
and Museums_ , 2015, report-national-digital-platform>. 110. Petter Nielsen and Ole Hanseth, “Fluid
Standards. A Case Study of a Norwegian Standard for Mobile Content Services,”
under review,
.
111. Sassen 2008, 3. 112. Grewal 2008. 113. Ibid., 9.

# II
Mapping Mass Digitization

# 2
The Trials, Tribulations, and Transformations of Google Books

## Introduction

In a 2004 article in the cultural theory journal _Critical Inquiry_ , book
historian Roger Chartier argued that the electronic world had created a triple
rupture in the world of text: by providing new techniques for inscribing and
disseminating the written word, by inspiring new relationships with texts, and
by imposing new forms of organization onto them. Indeed, Chartier foresaw that
“the originality and the importance of the digital revolution must therefore
not be underestimated insofar as it forces the contemporary reader to
abandon—consciously or not—the various legacies that formed it.”1 Chartier’s
premonition was inspired by the ripples that digitization was already
spreading across the sea of texts. People were increasingly writing and
distributing electronically, interacting with texts in new ways, and operating
and implementing new textual economies.2 These textual transformations gave
rise to a range of emotional reactions in readers and publishers, from
catastrophizing attitudes and pessimism about “the end of the book” to the
triumphalist mythologizing of liquid virtual books that were shedding their
analog ties like butterflies shedding their cocoons.

The most widely publicized mass digitization project to date, Google Books,
precipitated the entire emotional spectrum that could arise from these textual
transversals: from fears that control over culture was slipping from authors
and publishers into the hands of large tech companies, to hopeful ideas about
the democratizing potential of bringing knowledge that was once locked up in
dusty tomes at places like Harvard and Stanford to a wider public, and to a utopian
mythologizing of the transcendent potential of mass digitization. Moreover,
Google Books also affected legal and professional transformations of the
infrastructural set-up of the book, creating new precedents and a new
professional ethos. The cultural, legal, and political significance of Google
Books, whether positive or negative, not only emphasizes its fundamental role
in shaping current knowledge landscapes, it also allows us to see Google Books
as a prism that reflects more general political tendencies toward
globalization, privatization, and digitization, such as modulations in
institutional infrastructures, legal landscapes, and aesthetic and political
conventions. But how did the unlikely marriage between a tech company and
cultural memory institutions even come about? Who drove it forward, and around
and within which infrastructures? And what kind of cultural memory politics
did it produce? The following sections of this chapter will address some of
these problematics.

## The New Librarians

It was in the midst of a turbulent restructuring of the world of text, in
October 2004 at the Frankfurt International Book Fair, that Larry Page and
Sergey Brin of Google announced the launch of Google Print, a cooperation
between Google and leading Anglophone publishers. Google Print, which later
became Google Partner Program, would significantly alter the landscape and
experience of cultural memory, as well as its regulatory infrastructures. A
decade later, the traditional practices of reading, and the guardianship of
text and cultural works, had acquired entirely new meanings. In October 2004,
however, the publishing world was still unaware of Google’s pending influence
on the institutional world of cultural memory. Indeed, at that time, Amazon’s
mounting dominance in the field of books, which began a decade earlier in
1995, appeared to pose much more significant implications. The majority of
publishers therefore greeted Google’s plans in Frankfurt as a welcome
alternative to Jeff Bezos’s growing online behemoth.

Larry Page and Sergey Brin withheld a few details from their announcement at
Frankfurt, however; Google’s digitization plans would involve not only
cooperation with publishers, but also with libraries. As such, what would
later become Google Books would in fact consist of two separate, yet
interrelated, programs: Google Print (which would later become Google Partner
Program) and Google Library Project. In all secrecy, Google had for many
months prior to the Frankfurt Book Fair worked with select libraries in the US
and the UK to digitize their holdings. And in December 2004 the true scope of
Google’s mass digitization plans was revealed: what Page and Brin were
building was the foundation of a groundbreaking cultural memory archive,
inspired by the myth of Alexandria.3 The invocation of Alexandria situated the
nascent Google Books project in a cultural schema that historicized the
project as a utopian, even moral and idealist, project that could finally,
thanks to technology, exceed existing human constraints—legal, political, and
physical.4

Google’s utopian discourse was not foreign to mass digitization enthusiasts.
Indeed, it was the _langue du jour_ underpinning most large-scale digitization
projects, a discourse nurtured and influenced by the seemingly borderless
infrastructure of the web itself (which was often referred to in
universalizing terms). 5 Yet, while the universalizing discourse of mass
digitization was familiar, it had until then seemed like aspirational talk at
best, and strategic policy talk in the face of limited public funding, complex
copyright landscapes, and lumbering infrastructures, at worst. Google,
however, faced the task with a fresh attitude of determination and a will to
disrupt, as well as a very different form of leverage in terms of
infrastructural set-up. Google was already the world’s preferred search
engine, having mastered the tactical skill of navigating its users through
increasingly complex information landscapes on the web, and harvesting their
metadata in the process to continuously improve Google’s feedback systems.
Essentially, ever-larger amounts of information (understood here as “users”)
were passing through Google’s crawling engines, and as the masses of
information in Google’s server parks grew, so did their computational power.
Google Books, then, as opposed to most existing digitization projects, which
were conceived mainly in terms of “access,” was embedded in the larger system
of Google that understood the power and value of “feedback,” collecting
information and entering it into feedback loops between users, machines, and
engineers. Google also understood that information power didn’t necessarily
lie in owning all the information it gave access to, but rather in
controlling the informational processes themselves.

Yet, despite Google’s advances in the field of information seeking, the idea of
Google Books appeared as an odd marriage. Why was a private company in Silicon
Valley, working in the futuristic and accelerating world of software and fluid
information streams, intent on partnering up with the slow-paced world of
cultural memory institutions, traditionally more concerned with the past?
Despite the apparent clash of temporal and cultural regimes, however, Google
was in fact returning home to its point of inception. Google was born of a
research project titled the Stanford Integrated Digital Library Project, which
was part of the NSF’s Digital Libraries Initiative (1994–1999). Larry Page and
Sergey Brin were students then, working on the Stanford component of this
project, intending to develop the base technologies required to overcome the
most critical barriers to effective digital libraries, of which there were
many.6 Page’s and Brin’s specific project, titled Google, was presented as a
technical solution to the increasing amount of information on the World Wide
Web.7 At Stanford, Larry Page also tried to facilitate a serious discussion of mass digitization, and of whether or not it was feasible. But his
ideas received little support, and he was forced to leave the idea on the
drawing board in favor of developing search technologies.8

In September 1998, Sergey Brin and Larry Page left the library project to
found Google as a company and became immersed in search engine technologies.
However, a few years later, Page resuscitated the idea of mass digitization as
a part of their larger self-professed goal to change the world of information
by increasing access, scaling the amount of information available, and
improving computational power. They convinced Eric Schmidt, the new CEO of
Google, that the mass digitization of cultural works made sense not only from
an information perspective, but also from a business perspective, since the
vast amounts of information Google could extract from books would improve
Google’s ability to deliver information that was hitherto lacking, and this
new content would eventually also result in an increase in traffic and clicks
on ads.9

## The Scaling Techniques of Mass Digitization

A series of experiments followed on how to best approach the daunting task.
The emergence and decay of these experiments highlight the ways in which mass
digitization assemblages consist not only of thoughts, ideals, and materials,
but also a series of cultural techniques that entwine temporality,
materiality, and even corporeality. This perspective on mass digitization
emphasizes the mixed nature of mass digitization assemblages: what at first
glance appears as a relatively straightforward story about new technical
inventions, at a closer look emerges as complex entanglements of human and
nonhuman actors, with implications not only for how we approach it as a legal-
technical entity, but also as an infrapolitical phenomenon. As the following
section shows, attending to the complex cultural techniques of mass
digitization (its “how”) enables us to see that its “minor” techniques are not
excluded from or irrelevant to, but rather are endemic to, larger questions of
the infrapolitics of digital capitalism. Thus, Google’s simple technique of
scaling scanning to make the digitization processes go faster becomes
entangled in the creation of new habits and techniques of acceleration and
rationalization that tie in with the politics of digital culture and digital
devices. The industrial scaling of mass digitization becomes a crucial part of
the industrial apparatus of big data, which provides new modes of inscription
for both individuals and digital industries that in turn can be capitalized on
via data-mining, just as it raises questions of digital labor and copyright.

Yet, what kinds of scaling techniques—and what kinds of investments—Google
would have to leverage to achieve its initial goals was still unclear in those early years. Larry Page and his colleague Marissa Mayer therefore
began to experiment with the best ways to proceed. First, they created a
makeshift scanning device, whereby Marissa Mayer would turn the page and Larry
Page would click the shutter of the camera, guided by the pace of a
metronome.10 These initial mass digitization experiments signaled the
industrial nature of the mass digitization process, adding a metronomic rhythm, governed by the implacable regularity of the machine, to the temporal horizon of eternity (or at least of slow material decay) that characterizes cultural memory institutions.11 After some experimentation with scale and time, Google
bought a consignment of books from a second-hand book store in Arizona. They
scanned them and subsequently experimented with how to best index these works
not only by using information from the book, but also by pulling data about
the books from various other sources on the web. These extractions allowed
them to calculate a work’s relevance and importance, for instance by looking
at the number of times it had been referred to.12
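
To make the logic of this reference-based relevance ranking concrete, the following minimal sketch simply counts how many distinct sources refer to each work. It is purely illustrative: the data and the `score_books` helper are hypothetical stand-ins, not Google’s actual (and far more elaborate) ranking method.

```python
from collections import Counter

def score_books(references):
    """Score books by the number of distinct sources that refer to them."""
    counts = Counter()
    seen = set()
    for source, book in references:
        if (source, book) not in seen:  # count each citing source once per book
            seen.add((source, book))
            counts[book] += 1
    return counts

# Hypothetical harvested references: (citing_source, cited_book) pairs.
refs = [
    ("syllabus_a", "Moby-Dick"),
    ("review_site", "Moby-Dick"),
    ("syllabus_a", "Walden"),
    ("blog_post", "Moby-Dick"),
]

for book, score in score_books(refs).most_common():
    print(book, score)  # Moby-Dick 3, Walden 1
```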

In 2004 Google also filed for (and was later granted) patent rights to a scanner that would be able
to scan the pages of works without destroying them, and which would make them
searchable thanks to sophisticated 3D scanning and complex algorithms.13
Google’s new scanner used infrared camera technology that detected the three-
dimensional shape and angle of book pages when the book was placed in the
scanner. The information from the book was then transmitted to Optical
Character Recognition (OCR), which adjusted image focus and allowed the OCR
software to read images of curved surfaces more accurately.

![11404_002_fig_001.jpg](images/11404_002_fig_001.jpg)

Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in
scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.

These new scanning technologies allowed Google to unsettle the fixed content
of cultural works on an industrial scale and enter them into new distribution
systems. The untethering and circulation of text already existed, of course,
but now text would mutate on an industrial scale, bringing into coexistence a
multiplicity of archiving modes and textual accumulation. Indeed, Google’s
systematic scaling-up of already existing technologies constituted a new paradigm in mass digitization to a much larger extent than, for instance, the invention of any single new technology.14 Yet, while
Google’s new book scanners did expand the possibilities of capturing
information, Google couldn’t solve the problem of automating the process of
turning the pages of the books. For that they had to hire human scanners who
were asked to manually turn pages. The work of these human scanners was
largely invisible to the public, who could only see the books magically
appearing online as the digital archive accumulated. The scanners nevertheless
left ghostly traces, in the form of scanning errors such as pink fingers and
missing and crumpled pages—visual traces that underlined the historically
crucial role of human labor in industrializing and automating processes.15
Indeed, the question of how to solve human errors in the book scanning process
led to a series of inventive systems, such as the patent granted to Google in
2009 (filed in 2003), which describes a system that would minimize scanning
errors with the help of music.16 Later, Google open sourced plans for a book
scanner named “Linear Book Scanner” that would turn the pages automatically
with the help of a vacuum cleaner and a cleverly designed sheet metal
structure, after passing them over two image sensors taken from a desktop
scanner.17

Eventually, after much experimentation, Google consolidated its mass
digitization efforts in collaboration with select libraries.18 While some
institutions immediately and enthusiastically welcomed Google’s aspirations as
aligning with their own mission to improve access to information, others were
more hesitant, an institutional vacillation that hinted ominously at
controversy to come. Some libraries, such as the University of Michigan,
greeted the initiative with enthusiasm, whereas others, such as the Library of
Congress, saw a red flag pop up: copyright, one of the most fundamental elements in the regulation of the rights of texts and authors.19 The Library of Congress
questioned whether it was legal to scan and index books without a rights
holder’s permission. Google, in response, argued that it was within the fair
use provisions of the law, but the argument was speculative in so far as there
was no precedent for what Google was going to do. Some universities agreed with Google’s views on copyright, shared its desire to disrupt existing copyright practices, and allowed Google to make digital copies of their holdings (a precondition for creating an index of them); others were more guarded. Hence, some
libraries gave full access, others allowed only the scanning of books in the
public domain (published before 1923), and still others denied access
altogether. While the reticence of libraries was scattered, it was also a
precursor of a much more zealous resistance to Google Books, an opposition
that was mounted by powerful voices in the cultural world, namely publishers
and authors, and other commercial infrastructures of cultural memory.

![11404_002_fig_002.jpg](images/11404_002_fig_002.jpg)

Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R.
Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S.
Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

While Google’s announcement of its cooperation with publishers at the
Frankfurt Book Fair was received without drama—even welcomed by many—the
announcement of its cooperation with libraries a few months later caused a
commercial uproar. The most publicized point of contestation was the fact that
Google was now not only displaying books in cooperation with publishers, but
also building a library of its own, without remunerating publishers and
authors. Why would readers buy books if they could read them for free online? Moreover, the Authors Guild worried that Google’s digital library would increase the risk of piracy. Both factors, they argued, would make book buying a superfluous activity.20 At a deeper level, the case also emphasized authors’ and publishers’ desire to retain control over their copyrighted works in the face of the threat that the Library Project (unlike the Partner Program) was posing: Google was digitizing without the copyright holders’ permission. To them, the Library Project thus threatened not only their individual copyrights but, at a more fundamental level, the existing copyright system as a whole. The harsher criticisms framed Google Books as a book thief rather than as a global philanthropist.21 Google, for its part, launched a defense of its actions based on the notion of “fair use,” which, as the following section shows, eventually became the fundamental legal question.

## Infrastructural Transformations

Google Books became the symbol of the painful confusion and territorial
battles that marred the publishing world as it underwent a transformation from
analog to digital. The mounting and diverse opposition to Google Books was
thus not an isolated affair, but rather a persistent symptom—increasingly loud
stress signals emitting from the infrastructural joints of the analog realm of
books as it buckled under the strain of digital logic. As media theorist John
Durham Peters (drawing on media theorist Harold Innis) notes, the history of
media is also an “occupational history” that tells the tales of craftspeople
mastering medium-specific skills tactically battling for monopolies of
knowledge and guarding their access.22 And in the occupational history of
Google Books, the craftspeople of the printed book were being challenged by a
new breed of artificers who were excelling not so much in how to print, which
book sellers to negotiate with, or how to sell books to people, but rather in
the medium-specific tactical skills of the digital, such as building software
and devising search technologies, skills they were leveraging to their own
gain to create new “monopolies of knowledge” in the process.

As previously mentioned, the concerns expressed by publishers and authors regarding remuneration were accompanied by a more abstract sense of a loss of control over their works and of how this loss of control would affect their copyrights. These concerns did not arise out of thin air, but were part of a
more general discourse on digital information as something that _cannot_ be
secured and controlled in the same way as analog commodities can. Indeed, it
seemed that authors and publishers were part of a world entirely different
from Google Books: while publishers and authors were still living in and
defending a “regime of scarcity,” 23 Google Books, by contrast, was busy
building a “realm of plenitude and infinite replenishment.” As such, the clash
between the traditional infrastructures of the analog book and the new
infrastructures of Google Books was symptomatic of the underlying radical
reorganization of information from a state of trade and exchange to a state of
constant transmission and contagion.24

Foregrounding the fair use defense,25 Google argued that the public benefits
of scanning outweighed the negative consequences for authors.26 Influential
legal scholars, Lawrence Lessig among others, supported this argument,
suggesting that inclusion in a search engine in a way that does not erode the
value of the book was of such societal importance that it should be deemed
legal.27 The copyright owners, however, insisted that the burden should be on
Google to request permission to scan each work.28

Google and copyright owners reached a proposed settlement on October 28, 2008.
The proposal would allow Google not only to continue its scanning activities
and to show free snippets online, but would also give Google exclusive rights
to sell digital copies of out-of-print books. In return, Google would provide
all libraries in the United States with one free subscription to the digital
database, but Google could also sell additional subscriptions. Moreover,
Google was to pay $125 million, part of which would go to the construction of a Book Rights Registry that would identify rights holders and handle payments, and part to legal fees.29 Yet before the settlement had even been formally reviewed by the court, mounting opposition to it was launched in public.

The proposed settlement was received with harsh words, for instance by
Internet archivist Brewster Kahle and legal scholar Lawrence Lessig, who
opposed the settlement with words ranging from “insanity” to “cultural
asphyxiation” and “information monopoly.”30 Privacy proponents also spoke out
against Google Books, bringing attention to the implications of Google being
able to follow and track reading habits, among other things.31 The
organization Privacy Authors, which included writers such as Jonathan Lethem, Bruce Schneier, and Michael Chabon, as well as a number of publishers, argued that although Google
Books was an “extremely exciting” project, it failed in its current form to
protect the privacy of readers, thus creating a “real risk of disclosure” of
sensitive information to “prying governmental entities and private litigants,”
potentially giving rise to a “chilling effect,” hurting not only readers but
also authors and publishers, not least those writing about sensitive or
controversial topics.32 The American Library Association also raised a set of
concerns, such as the cost of library subscriptions and privacy.33 And most
predictably, companies such as Amazon and Microsoft, who also had a stake in
mass digitization, opposed the settlement; Microsoft even funded some nuanced
research efforts into its implications.34 Finally, and most damningly, the
Department of Justice decided to get involved with an antitrust argument.

By this point, opposition to the Google Books project, as it was outlined in
the proposed settlement, wasn’t only motivated by commercial concerns; it was
now also motivated by a public that framed Google’s mass digitization project
as a parasitical threat to the public sphere itself. The framing of Google as
a potential menace was a jarring image that stood in stark contrast to Larry
Page’s and Sergey Brin’s philanthropic attitudes and to Google’s famous “Don’t
be evil” slogan. The public reaction thus signaled a change in Google’s
reputation as the company metamorphosed in the public eye from a small
underdog company to a multinational corporation with a near-monopoly in the
search industry. Google’s initially inspiring approach to information as a
realm of plenitude now appeared, in the public view, more akin to the actions of megalomaniac land-grabbers.

Google, however, while maintaining its universalizing mission regarding
information, also countered the accusations of monopoly building, arguing that
potential competitors could just step up, since nothing in the agreements
entered into by the libraries and Google “precludes any other company or
organization from pursuing their own similar effort.”35 Nevertheless, Judge Denny Chin rejected the settlement in March 2011 with the following statement: “The question presented is whether the ASA is fair, adequate, and reasonable. I conclude that it is not.”36 Google left the proposed settlement behind and returned to litigating the original case, supported by new amicus briefs and focusing on the argument that book scanning was fair use. They argued that they were not
demanding exclusivity on the information they scanned, that they didn’t
prohibit other actors from digitizing the works they were digitizing, and that
their main goal was to enrich the public sphere with more information, not to
build an information monopoly. In July 2013 Judge Denny Chin issued a new
opinion confirming that Google Books was indeed fair use.37 Chin’s opinion was
later consolidated in a major victory for Google in 2015, when Judge Pierre Leval of the Second Circuit Court affirmed the legality of Google Books with the words: “Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses.”38 Leval’s decision marked a new direction, not only for
Google Books, but also for mass digitization in general, as it signaled a
shift in cultural expectations about what it means to experience and
disseminate cultural artifacts.

Once again, the story of Google Books took a new turn. What was first
presented as a gift to cultural memory institutions and the public, and later
as theft from and threat to these same entities, on closer inspection revealed
itself as a much more complex circulatory system of expectations, promises,
risks, and blame. Google Books thus instigated a dynamic and forceful
connection between Google and cultural memory institutions, where the roles of
giver and receiver, and the first giver and second giver/returner, were
difficult to decode. Indeed, the binding nature of the relationship between
Google Books and cultural memory institutions proved to be much more complex
than the simple physical exchange of books and digital files. As the next
section outlines, this complex system of cultural production was held together
by contractual arrangements—central joints, as it were, connecting data and
works, public and private, local and global, in increasingly complex ways. For
Google Books, these contractual relations appear as the connective tissues
that make these assemblages possible, and which are therefore fundamental to
their affective dimensions.

## The Infrapolitics of Contract

In common parlance a contract is a legal tool that formalizes a “mutual
agreement between two or more parties that something shall be done or forborne
by one or both,” often enforceable by law.39 Contractual systems emerged with
the medieval merchant regime and, with classical liberalism, later came to figure as an ideological revolt against paternalist systems: nothing less than freedom itself, a legal construct that could destroy the sentimental bonds of
personal dependence.40 As the classic liberal social scientist William Graham
Sumner argued, “[c]ontract … is rational … realistic, cold, and matter-of-
fact.” The rational nature of contracts also affected their temporality, since
a contract endures only “so long as the reason for it endures,” and their
spatiality, relegating any form of sentiment from the public sphere to “the
sphere of private and personal relations.”41

Sentiments prevailed, however, as the contracts tying together Google and
cultural memory institutions emerged. Indeed, public and professional
evaluations of the agreements often took an affective, even sexualized, form.
The economist Paul Courant situated libraries “in bed with Google”42; library
consultants and media experts Jeff Ubois and Peter B. Kaufman recounted _how_
they got in bed with Google—“[w]e were approached singly, charmed in
confidence, the stranger was beguiling, and we embraced” 43; communication
scholar Evelyn Bottando announced that “libraries not only got in bed with
Google. They got married”44; and librarian Jessamyn West finally pondered the ruins of the relationship: “[s]till not sure, after all that, how we got this all
so wrong. Didn’t we both want the same thing? Maybe it really wasn’t us, it
was them. Most days it’s hard to remember what we saw in Google. Why did we
think we’d make good partners?”45

The evaluative discourse around Google Books dispels the idea of contracts as
dispassionate transactions for services and labor, showing rather that
contracts are infrapolitical apparatuses that give rise to emotions and
affect; and that, moreover, they are systems of doctrines, relations, and
social artifacts that organize around specific ideologies, temporalities,
materialities, and techniques.46 First and foremost, contracts give rise to
new kinds of infrastructures in the field of cultural memory: they mediate,
connect, and converge cultural memory institutions globally, giving rise to
new institutional networks, in some cases increasing globalization and
mobility for both users and objects, and in other cases restricting the same.
The Google Books contracts display both technical and symbolic aspects: as
technical artifacts they establish intricate frameworks of procedures,
commitments, rights, and incentives for governing the transactions of cultural
memory artifacts and their digitized copies. As symbolic artifacts they evoke
normative principles, expressing different measures of good will toward
libraries, but also—as all contracts do—introduce the possibility of distrust,
conflict and betrayal.47

Despite their centrality to mass digitization assemblages, and although some
of them have been made available to the public,48 the content of these particular contracts still suffers from the epistemic gap created, in practical and symbolic form, by Google’s agreements and Non-Disclosure Agreements (NDAs), the latter being a kind of agreement most libraries are required to sign when entering into partnership with Google. Like all contracts, the individual contracts signed by the partner libraries vary in nature and have different implications. While many of Google’s agreements may be publicly available, they have often only
been made public through requests and transparency mechanisms such as the
Freedom of Information Act. As the Open Rights Group notes in their
publication of the agreement entered between the British Library and Google,
“We asked the British Library for a copy of the agreement with Google, which
was not uploaded to their transparency website with other similar contracts,
as it didn’t involve monetary exchange. This may be a loophole transparency
activists want to look at. After some toing and froing with the Freedom of
Information Act we got a copy.”49

While the culture of contractual secrecy is native to the business world, with
its safeguarding of business processes, and is easily navigated by business
partners, it is often opposed to the ethos of state-subsidized cultural
institutions, which “draw their financial and moral support from a public that
expects transparency in their activities, ranging from their materials
acquisitions to their business deals.”50 For these reasons, library
organizations have recommended that nondisclosure agreements should be avoided
if possible, and minimized if they are necessary.51 Google, in response, noted
on its website that: “[t]hough not all of the library contracts have been made
public, we can say that all of them are non-exclusive, meaning that all of our
library partners are free to continue their own scanning projects or work with
others while they work with Google to digitize their books.”52

Regardless of their contractual content and later publication, the contracts
are a vital instrument in Google’s broader management of visibility. As Mikkel
Flyverbom, Clare Birchall, and others have argued, this practice of visibility
management—which they define as “the many ways in which organizations seek to
curate and control their presence, relations, and comprehension vis-à-vis
their surroundings” through practices of transparency, secrecy, opacity,
surveillance, and disclosure—is in the digital age a complex issue closely
tied to the question of governance and power. While each publication act may
serve to create an uncomplicated picture of transparency, it nevertheless
happens in a paradoxical global regulatory environment that on the one hand
encourages “sunshine” laws that demand that governments, corporations, and
civil-sector organizations provide access to information, yet on the other
hand also harbors regulatory agencies that seek mechanisms and rules by which
to keep information hidden. Thus, as Flyverbom et al. conclude, the “everyday
practices of organizing invariably implicate visibility management,” whose
valences are “attached to transparency and opacity” that are not simple and
straightforward, but rather remain “dependent upon the actor, the context, and
the purpose of organizations and individuals.”53

Steven Levy recounts how Google began its scanning operations in “near-total
stealth,” a “cloak-and-dagger” approach that stood in contrast to Google’s
public promotion of transparency as a new mode of existence. As Levy argues,
“[t]he secrecy was yet another expression of the paradox of a company that
sometimes embraced transparency and other times seemed to model itself on the
NSA.”54 Yet, while secrecy practices may have suited some of Google’s
operations, they sit much more uneasily with their book scanning programs: “If
Google had a more efficient way to scan books, sharing the improved techniques
could benefit the company in the long run—inevitably, much of the output would
find its way onto the web, bolstering Google’s indexes. But in this case,
paranoia and a focus on short-term gain kept the machines under wraps.”55 The
nondisclosure agreements show that while boundaries may be blurred between
Google Books and libraries, we may still identify different regulatory models
and modes of existence within their networks, including the explicit _library
ethos_ (in the Weberian sense of the term) of public access, not only to the
front end but also to some areas of the back end, and the business world’s
secrecy practices. 56

Entering into a mass digitization public-private partnership (PPP) with a
corporation such as Google is thus not only a logical and pragmatic next step
for cultural memory institutions, it is also a political step. As already
noted, Google Books, through its embedding in Google, injects cultural memory
objects into new economic and cultural infrastructures. These infrastructures
are governed less by the hierarchical world of curators, historians, and
politicians, and more by feedback networks of tech companies, users, and
algorithms. Moreover, they forge ever closer connections to data-driven market
logics, where computational rather than representational power counts. Mass
digitization PPPs such as Google Books are thus also symptoms of a much more
pervasive infrapolitical situation, in which cultural memory institutions are
increasingly forced to alter their identities from public caretakers of
cultural heritage to economic actors in the EU internal market, controlled by
the framework of competition law, time-limited contracts, and rules on state
aid.57 Moreover, mastering the rules of these new infrastructures is not
necessarily an easy feat for public institutions.58 Thus, while Google claims
to hold a core commitment regarding free digital access to information, and
while its financial apparatus could be construed as making Google an eligible
partner in accordance with the EU’s policy objectives toward furthering
public-private partnerships in Europe,59 it is nevertheless, as legal scholar
Maurizio Borghi notes, relevant to take into account Google’s previous
monopoly-building history.60

## The Politics of Google Books

A final aspect of Google Books relates to the universal aspiration of Google
Books’s collection, its infrapolitics, and what it empirically produces in
territorial terms. As this chapter’s previous sections have outlined, it was
an aspiration of Google Books to transcend the cultural and political
limitations of physical cultural memory collections by gathering the written
material of cultural memory institutions into one massive digitized
collection. Yet, while the collection spans millions of works in hundreds of
languages from hundreds of countries,61 it is also clear that even large-scale
mass digitization processes still entail procedures of selection on multiple
levels from libraries to works. These decisions produce a political reality
that in some respects reproduces and accentuates the existing politics of
cultural memory institutions in terms of territorial and class-based
representations, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

One obvious area in which to examine the politics produced by the Google Books
assemblage is in the selection of libraries that Google chooses to partner
with.62 While the full list of Google Books partners is not disclosed on
Google’s own webpage, it is clear from the available list that, up to now,
Google Books has mainly partnered with “great libraries,” such as elite
university libraries and national libraries. The rationale for choosing these
libraries has no doubt been to partner up with cultural memory institutions
that preside over as much material as possible, and which are therefore able
to provide more pieces of the puzzle than, say, a small-town public library
that presides over only a fraction of such material. Yet, while these
libraries provide Google Books with an impressive and extensive collection of
rare and valuable artifacts that give the impression of a near-universal
collection, they nevertheless also contain epistemological and historical
gaps. Historian and digital humanist Andrew Prescott notes, for example, the
limited holdings in elite libraries of literature written by workers and other lower-class people in the early eighteenth century. This institutional
lack creates a pre-filtered collection in Google Books, favoring “[t]hose
writers of working class origins who had a success story to report, who had
become distinguished statesmen, successful businessmen, religious leaders and
so on,” that is, the people who were “able to find commercial publishers who
were interested in their story.”63 Google’s decision to partner with elite
libraries thus inadvertently reproduces the class-based biases of analog
cultural memory institutions.

In addition to the reproduction of analog class-based bias in its digital
collection, the Google Books corpus also displays a genre bias, veering
heavily toward scientific publications. As mathematicians Eitan Pechenick et al. show, the contents of the Google Books corpus in the period of the 1900s are “increasingly dominated by scientific publications rather than popular
works,” and “even the first data set specifically labeled as fiction appears
to be saturated with medical literature.”64 The fact that Google Books is
constellated in such a manner thus challenges a “vast majority of existing
claims drawn from the Google Books corpus,” just as it points to the need “to
fully characterize the dynamics of the corpus before using these data sets to
draw broad conclusions about cultural and linguistic evolution.”65

Last but not least, Google Books’s collection still bespeaks its beginnings:
it still primarily covers Anglophone ground. There is hardly any literature
that reviews the geographic scope of Google Books, but existing work does
suggest that Google is still heavily oriented toward US-based libraries.66
This orientation does not necessarily give rise to an Anglophone linguistic
hegemony, as some have feared, since many of the Anglophone libraries hold
considerable collections of foreign language books. But it does invariably
limit its collections to the works in foreign languages that the elite
libraries deemed worthy of preserving. The gaps and biases of Google Books
reveal it to be less of a universal and monolithic collection, and more of an
impressive, but also specific and contingent, assemblage of works, texts, and
relations that is determined by the relations Google Books has entered into in
terms of class, discipline, and geographical scope.

Google Books is the result of selection processes not only at the level of partnering institutions, but also at the level of organizational infrastructure. While the infrastructures of Google Books in fact depart from those of its parent company in many regards in order to avoid copyright infringement charges, there is little doubt that people working actively on
Google’s digitization activities (included here are both users and Google
employees) are also globally distributed in networked constellations. The
central organization for cultural digitization, the Google Cultural Institute,
is located in Paris, France. Yet the people affiliated with this hub are
working across several countries. Moreover, people working on various aspects
of Google Books, from marketing to language technology, to software
developments and manual scanning processes, are dispersed across the globe.
And it is perhaps in this way that we tend to think of Google in general—as a
networked global company—and for good reasons. Google has been operating
internationally for almost as long as it has been around. It has offices in
countries all over the globe, and works in numerous languages. Today it is one
of the most important global information institutions, and as more and more
people turn to Google for its services, Google also increasingly reflects
them—indeed they enter into a complex cognitive feedback system.
Google depends on the growing diversity of its “inhabitants” and on its
financial and cultural leverage on a global scale, and to this effect it is
continuously fine-tuning its glocalization strategies, blending the universal
and the particular. This glocal strategy does not necessarily create a
universal company, however; it would be more correct to say that Google’s
glocality brings the globe to Google, redefining it as an “American”
company.67 Hence, while there is little doubt that Google, and in effect
Google Books, increasingly tailors its services to specific consumers,68 and that this
tailoring allows for a more complex global representation generated by
feedback systems, Google’s core nevertheless remains lodged on American soil.
This is underlined by the fact that Google Books still effectively belongs to
US jurisdiction.69 Google Books is thus on the one hand a globalized enterprise in terms of both content and institutional framework; yet it also remains part of an _American_ multinational corporation, constrained by US regulation and social
standards, and ultimately reinforcing the capacities of the American state.
While Google Books operates as a networked glocal project with universal
aspirations, then, it also remains fenced in by its legal and cultural
apparatuses.

In sum, just as a country’s regulatory and political apparatus affects the
politics of its cultural memory institutions in the analog world, so is the
politics of Google Books co-determined by the operations of Google. Thus,
curatorial choices are made not only on the basis of content, but also on the basis of the
location of server parks, existing company units, lobbying efforts, public
policy concerns, and so on. And the institutional identity of Google Books is
profoundly late-sovereign in this regard: on one hand it thrives on and
operates with horizontal network formations; on the other, it still takes into
account and has to operate with, and around, sovereign epistemologies and
political apparatuses. These vertical and horizontal lines ultimately rewire
the politics of cultural memory, shifting the stakes from sovereign
territorial possessions to more functional, complex, and effective means of
control.

## Notes

1. Chartier 2004. 2. As philosopher Jacques Derrida noted anecdotally on his
colleagues’ way of reading, “some of my American colleagues come along to
seminars or to lecture theaters with their little laptops. They don’t print
out; they read out directly, in public, from the screen. I saw it being done
as well at the Pompidou Center [in Paris] a few days ago. A friend was giving
a talk there on American photography. He had this little Macintosh laptop
there where he could see it, like a prompter: he pressed a button to scroll
down his text. This assumed a high degree of confidence in this strange
whisperer. I’m not yet at that point, but it does happen.” (Derrida 2005, 27).
3. As Ken Auletta recounts, Eric Schmidt remembers when Page surprised him in
the early 2000s by showing off a book scanner he had built which was inspired
by the great library of Alexandria, claiming that “We’re going to scan all the
books in the world,” and explaining that for search to be truly comprehensive
“it must include every book ever published.” Page literally wanted Google to
be a “super librarian” (Auletta 2009, 96). 4. Constraints of a physical
character (how to digitize and organize all this knowledge in physical form);
legal character (how to do it in a way that suspends existing regulation); and
political character (how to transgress territorial systems). 5. Take, for
instance, project Bibliotheca Universalis, comprising American, Japanese,
German, and British libraries among others, whose professed aim was “to
exploit existing digitization programs in order to … make the major works of
the world’s scientific and cultural heritage accessible to a vast public via
multimedia technologies, thus fostering … exchange of knowledge and dialogue
over national and international borders.” It was a joint project of the French
Ministry of Culture, the National Library of France, the Japanese National
Diet Library, the Library of Congress, the National Library of Canada,
Discoteca di Stato, Deutsche Bibliothek, and the British Library:
. The project took its name
from the groundbreaking sixteenth-century publication _Bibliotheca Universalis_
(1545–1549), a four-volume alphabetical bibliography that listed all the known
books printed in Latin, Greek, or Hebrew. Obviously, the dream of the total
archive is not limited to the realm of cultural memory institutions, but has a
much longer and more generalized lineage; for a contemporary exploration of
these dreams see, for instance, issue six of _Limn Magazine_ , March 2016,
. 6. As the project noted in its research summary,
“One of these barriers is the heterogeneity of information and services.
Another impediment is the lack of powerful filtering mechanisms that let users
find truly valuable information. The continuous access to information is
restricted by the unavailability of library interfaces and tools that
effectively operate on portable devices. A fourth barrier is the lack of a
solid economic infrastructure that encourages providers to make information
available, and give users privacy guarantees”; Summary of the Stanford Digital
Library Technologies Project,
. 7. Brin and Page
1998. 8. Levy 2011, 347. 9. Levy 2011, 349. 10. Levy 2011, 349. 11. Young
1988. 12. They had a hard time, however, creating a new PageRank-like
algorithm for books; see Levy 2011, 349. 13. Google Inc., “Detection of
Grooves in Scanned Images,” March 24, 2009,
[https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA](https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA).
14. See, for example, Jeffrey Toobin. “Google’s Moon Shot,” _New Yorker_ ,
February 4, 2007, shot>. 15. Scanners whose ghostly traces are still found in digitized books
today are evidenced by a curious little blog collecting the artful mistakes of
scanners, _The Art of Google Books_ , .
For a more thorough and general introduction to the historical relationship
between humans and machines in labor processes, see Kang 2011. 16. The
abstract from the patent reads as follows: “Systems and methods for pacing and
error monitoring of a manual page turning operator of a system for capturing
images of a bound document are disclosed. The system includes a speaker for
playing music having a tempo and a controller for controlling the tempo based
on an imaging rate and/or an error rate. The operator is influenced by the
music tempo to capture images at a given rate. Alternative or in addition to
audio, error detection may be implemented using OCR to determine page numbers
to track page sequence and/or a sensor to detect errors such as object
intrusion in the image frame and insufficient light. The operator may be
alerted of an error with audio signals and signaled to turn back a certain
number of pages to be recaptured. When music is played, the tempo can be
adjusted in response to the error rate to reduce operator errors and increase
overall throughput of the image capturing system. The tempo may be limited to
a maximum tempo based on the maximum image capture rate.” See Google Inc.,
“Pacing and Error Monitoring of Manual Page Turning Operator,” November 17,
2009, . 17. Google, “linear-book-
scanner,” _Google Code Archive_ , August 22, 2012,
. 18. The libraries of
Harvard, the University of Michigan, Oxford, Stanford, and the New York Public
Library. 19. Levy 2011, 351. 20.  _The Authors Guild et al. vs. Google, Inc._
, Class Action Complaint 05 CV 8136, United States District Court, Southern
District of New York, September 20, 2005,
/settlement-resources.attachment/authors-
guild-v-google/Authors%20Guild%20v%20Google%2009202005.pdf>. 21. As the
Authors Guild notes, “The problem is that before Google created Book Search,
it digitized and made many digital copies of millions of copyrighted books,
which the company never paid for. It never even bought a single book. That, in
itself, was an act of theft. If you did it with a single book, you’d be
infringing.” Authors Guild v. Google: Questions and Answers,
. 22.
Peters 2015, 21. 23. Hayles 2005. 24. Purdon 2016, 4. 25. Fair use constitutes
an exception to the exclusive right of the copyright holder under the United
States Copyright Act; if the use of a copyright work is a “fair use,” no
permission is required. For a court to determine if a use of a copyright work
is fair use, four factors must be considered: (1) the purpose and character of
the use, including whether such use is of a commercial nature or is for
nonprofit educational purposes; (2) the nature of the copyrighted work; (3)
the amount and substantiality of the portion used in relation to the
copyrighted work as a whole; and (4) the effect of the use upon the potential
market for or value of the copyrighted work. 26. “Do you really want … the
whole world not to have access to human knowledge as contained in books,
because you really want opt out rather than opt in?” as quoted in Levy 2011,
360. 27. “It is an astonishing opportunity to revive our cultural past, and
make it accessible. Sure, Google will profit from it. Good for them. But if
the law requires Google (or anyone else) to ask permission before they make
knowledge available like this, then Google Print can’t exist” (Farhad Manjoo,
“Indexing the Planet: Throwing Google at the Book,” _Spiegel Online
International_ , November 9, 2005, /indexing-the-planet-throwing-google-at-the-book-a-383978.html>.) Technology
lawyer Jonathan Band also expressed his support: Jonathan Band, “The Google
Print Library Project: A Copyright Analysis,” _Journal of Internet Banking and
Commerce_ , December 2005, google-print-library-project-a-copyright-analysis.php?aid=38606>. 28.
According to Patricia Schroeder, the Association of American Publishers (AAP)
President, Google’s opt-out procedure “shifts the responsibility for
preventing infringement to the copyright owner rather than the user, turning
every principle of copyright law on its ear.” BBC News, “Google Pauses Online
Books Plan,” _BBC News_ , August 12, 2005,
. 29. Professor of law,
Pamela Samuelson, has conducted numerous progressive and detailed academic and
popular analyses of the legal implications of the copyright discussions; see,
for instance, Pamela Samuelson, “Why Is the Antitrust Division Investigating
the Google Book Search Settlement?,” _Huffington Post_ , September 19, 2009,
divi_b_258997.html>; Samuelson 2010; Samuelson 2011; Samuelson 2014. 30. Levy
2011, 362; Lessig 2010; Brewster Kahle, “How Google Threatens Books,”
_Washington Post_ , May 19, 2009, dyn/content/article/2009/05/18/AR2009051802637.html>. 31. EFF, “Google Book
Search Settlement and Reader Privacy,” Electronic Frontier Foundation, n.d.,
. 32.  _The Authors Guild et
al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern District of New
York, March 22, 2011,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
33. Brief of Amicus Curiae, American Library Association et al. in relation to
_The Authors Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, filed on August 1
2012,
.
34. Steven Levy, “Who’s Messing with the Google Books Settlement? Hint:
They’re in Redmond, Washington,” _Wired_ , March 3, 2009,
. 35. Sergey Brin, “A Library
to Last Forever,” _New York Times_ , October 8, 2009,
. 36.  _The Authors
Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern
District of New York, March 22, 2011,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
37. “Google does, of course, benefit commercially in the sense that users are
drawn to the Google websites by the ability to search Google Books. While this
is a consideration to be acknowledged in weighing all the factors, even
assuming Google’s principal motivation is profit, the fact is that Google
Books serves several important educational purposes. Accordingly, I conclude
that the first factor strongly favors a finding of fair use.” _The Authors
Guild et al. vs. Google Inc_., 05 Civ. 8136-DC, United States Southern
District of New York, November 14, 2013,
[http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355).
38.  _Authors Guild v. Google, Inc_., 13–4829-cv, December 16, 2015,
81c0-23db25f3b301/1/doc/13-4829_opn.pdf>. In the aftermath of Pierre Leval’s
decision the Authors Guild has yet again filed yet another petition for the
Supreme Court to reverse the appeals court decision, and has publically
reiterated the framing of Google as a parasite rather than a benefactor. A
brief supporting the Guild’s petition and signed by a diverse group of authors
such as Malcolm Gladwell, Margaret Atwood, J. M. Coetzee, Ursula Le Guin, and
Yann Martel noted that the legal framework used to assess Google knew nothing
about “the digital reproduction of copyrighted works and their communication
on the Internet or the phenomenon of ‘mass digitization’ of vast collections
of copyrighted works”; nor, they argued, was the fair-use doctrine ever
intended “to permit a wealthy for-profit entity to digitize millions of works
and to cut off authors’ licensing of their reproduction, distribution, and
public display rights.” Amicus Curiae filed on behalf of Author’s Guild
Petition, No. 15–849, February 1, 2016, content/uploads/2016/02/15-849-tsac-TAA-et-al.pdf>. 39. Oxford English
Dictionary,
[http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140](http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140).
40. The contract as we know it today developed within the paradigm of Lex
Mercatoria; see Teubner 1997. The contract is therefore a device of global
reach that has developed “mainly outside the political structures of nation-
states and international organisations for exchanges primarily in a market
economy” (Snyder 2002, 8). In the contract theory of John Locke, the
signification of contracts developed from a mere trade tool to a distinction
between the free man and the slave. Here, the societal benefits of contracts
were presented as a matter of time, where the bounded delineation of work was
characterized as contractual freedom; see Locke 2003 and Stanley 1998. 41.
Sumner 1952, 23. 42. Paul Courant, “On Being in Bed with Google,” _Au Courant_
, November 4, 2007, google>. 43. Kaufman and Ubois 2007. 44. Bottando 2012. 45. Jessamyn West,
“Google’s Slow Fade With Librarians: Maybe They’re Just Not That Into Us,”
_Medium_ , February 2, 2015, with-librarians-fddda838a0b7>. 46. Suchman 2003. The lack of research into
contracts and emotions is noted by Hillary M. Berk in her fascinating research
on contracts in the field of surrogacy: “Despite a rich literature in law and
society embracing contracts as exchange relations, empirical work has yet to
address their emotional dimensions” (Berk 2015). 47. Suchman 2003, 100. 48.
See a selection on the Public Index:
, and The Internet Archive:
. You may also find
contracts here: the University of Michigan ( /michigan-digitization-project>), the University of Cali­fornia
(), the Committee on
Institutional Cooperation ( google-agreement>), and the British Library
( google-books-and-the-british-library>), to name but a few. 49. Javier Ruiz,
“Is the Deal between Google and the British Library Good for the Public?,”
Open Rights Group, August 24, 2011, /access-to-the-agreement-between-google-books-and-the-british-library>. 50.
Kaufman and Ubois 2007. 51. Association of Research Libraries, “ARL Encourages
Members to Refrain from Signing Nondisclosure or Confidentiality Clauses,”
_ARL News_ , June 5, 2009, encourages-members-to-refrain-from-signing-nondisclosure-or-confidentiality-
clauses#.Vriv-McZdE4>. 52. Google, “About the Library Project,” _Google Books
Help,_ n.d.,
[https://support.google.com/books/partner/faq/3396243?hl=en&rd=1](https://support.google.com/books/partner/faq/3396243?hl=en&rd=1).
53. Flyverbom, Leonardi, Stohl, and Stohl 2016. 54. Levy 2011, 354. 55. Levy
2011, 352. 56. To be sure, however, the practice of secrecy is no stranger to
libraries. Consider only the closed stack that the public is never given
access to; the bureaucratic routines that are kept from the public eye; and
the historic relation between libraries and secrecy so beautifully explored by
Umberto Eco in numerous of his works. Yet, the motivations for nondisclosure
agreements on the one hand and public sector secrets on the other differ
significantly, the former lodged in a commercial logic and the latter in an
idea, however abstract, about “the public good.” 57. Belder 2015. For insight
into the societal impact of contractual regimes on civil rights regimes, see
Somers 2008. For insight into relations between neoliberalism and contracts,
see Mitropoulos 2012. 58. As engineer and historian Henry Petroski notes, for
a PPP contract to be successful a contract must be written “properly” but “the
public partners are not often very well versed in these kinds of contracts and
they don’t know how to protect themselves.” See Buckholtz 2016. 59. As argued
by Lucky Belder in “Cultural Heritage Institutions as Entrepreneurs,” 2015.
60. Borghi 2013, 92–115. 61. Stephan Heyman, “Google Books: A Complex and
Controversial Experiment,” _New York Times_ , October 28, 2015,
and-controversial-experiment.html>. 62. Google, “Library Partners,” _Google
Books_ , . 63. Andrew
Prescott, “How the Web Can Make Books Vanish,” _Digital Riffs_ , August 2013,
.
64. Pechenick, Danforth, Dodds, and Barrat 2015. 65. What Pechenick et al.
refer to here is of course the claims of Erez Aiden and Jean-Baptiste Michel
among others, who promote “culturomics,” that is, the use of huge amounts of
digital information—in this case the corpus of Google Books—to track changes
in language, culture, and history. See Aiden and Michel 2013; and Michel et
al. 2011. 66. Neubert 2008; and Weiss and James 2012, 1–3. 67. I am indebted
to Gayatri Spivak here, who makes this argument about New York in the context
of globalization; see Spivak 2000. 68. In this respect Google mirrors the
glocalization strategies of media companies in general; see Thussu 2007, 19.
69. Although the decisions of foreign legislation of course also affect the
workings of Google, as is clear from the growing body of European regulatory
casework on Google such as the right to be forgotten, competition law, tax,
etc.

# 3 Sovereign Soul Searching: The Politics of Europeana

## Introduction

In 2008, the European Commission launched the European mass digitization
project, Europeana, to great fanfare. Although the EC’s official
communications framed the project as a logical outcome of years of work on
converging European digital library infrastructures, the project was received
in the press as a European counterresponse to Google Books.1 The popular media
framings of Europeana were focused in particular on two narratives: that
Europeana was a public response to Google’s privatization of cultural memory,
and that Europeana was a territorial response to American colonization of
European information and culture. This chapter suggests that while both of
these sentiments were present in Europeana’s early years, the politics of what
Europeana was—and is—paints a more complicated picture. A closer glance at
Europeana’s social, economic, and legal infrastructures thus shows that the
European mass digitization project is neither an attempt to replicate Google’s
glocal model nor a continuation of traditional European cultural policies. Rather, Europeana produces a new form of cultural memory politics that converges national and supranational imaginaries with global information
infrastructures.

If global information infrastructures and national politics today seemingly go
hand in hand in Europeana, it wasn’t always so. In fact, in the 1990s,
networked technologies and national imaginaries appeared to be mutually
exclusive modes of existence. The fall of the Berlin Wall in 1989 nourished a
new antisovereign sentiment, which gave rise to recurring claims in the 1990s that the age of sovereignty had passed into an age of post-sovereignty. These claims were fueled by a globalized set of economic, political, and technological forces, not least of which was the seemingly ungovernable nature of the Internet—which appeared to unbuckle the nation-state’s control and voice in the process of globalization and gave rise to a sense of plausible anarchy, which in turn made John Perry Barlow’s (in)famous “Declaration of the Independence of Cyberspace” appear not as pure utopian fabulation, but rather
as a prescient diagnosis.2 Yet, while it seemed in the early 2000s that the
Internet and the cultural and economic forces of globalization had made the
notion and practice of the nation-state redundant on both practical and
cultural levels, the specter of the nation nevertheless seemed to linger.
Indeed, the nation-state continued to remain a fixed point in political and
cultural discourses. In fact, it not only lingered as a specter, but borders
were also beginning to reappear as regulatory forces. The borderless world
was, as Tim Wu and Jack Goldsmith noted in 2006, an illusion;3 geography had avenged itself, not least in the digital environment.4

Today, no one doubts the cultural-political import of the national imaginary.
The national imaginary has fueled antirefugee movements, the surge of
nationalist parties, the EU’s intensified crisis, and the election of Donald
Trump, to name just a few critical political events in the 2010s. Yet, while
the nationalist imaginary is becoming ever stronger, paradoxically its
communicative infrastructures are simultaneously becoming ever more
globalized. Thus, globally networked digital infrastructures are quickly
supplementing, and in many cases even supplanting, those national
communicative infrastructures that were instrumental in establishing a
national imagined community in the first place—infrastructures such as novels
and newspapers.5 The convergence of territorially bounded imaginaries and
global networks creates new cultural-political constellations of cultural
memory where the centripetal forces of nationalism operate alongside,
sometimes with and sometimes against, the centrifugal forces of digital
infrastructures. Europeana is a preeminent example of these complex
infrastructural and imaginary dynamics.

## A European Response

When Google announced its digitization program at the Frankfurt Book Fair in
2004, it instantly created ripples in the European cultural-political
landscape, in France in particular. Upon hearing the news about Google’s
plans, Jacques Chirac, president of France at the time, promptly urged the
then-culture minister, Renaud Donnedieu de Vabres, and Jean-Noël Jeanneney,
head of France’s Bibliothèque nationale, to commence a similar digitization
project and to persuade other European countries to join them.6 The seeds for
Europeana were sown by France, “the deepest, most sedimented reservoir of
anti-American arguments,”7 as an explicitly political reaction to Google
Books.

Europeana was thus from its inception laced with the ambiguous political
relationship between two historically competing universalist-exceptionalist
nations: the United States and France.8 It is a relationship that France sometimes pictures as a question of Americanization, and at other times extends to an image of a more diffuse Anglo-Saxon constellation. Highlighting the effects
Google Books would have on French culture, Jeanneney argued that Google’s mass
digitization efforts would pose several possible dangers to French cultural
memory such as bias in the collecting and organizing practices of Google Books
and an Anglicization of the cultural memory regulatory system. Explaining why
Google Books should be seen not only as an American, but also as an Anglo-
Saxon project, Jeanneney noted that while Google Books “was obviously an
American project,” it was nevertheless also one “that reached out to the
British.” The alliance between the Bodleian Library at Oxford and Google Books
was thus not only a professional partnership in Jeanneney’s eyes, but also a
symbolic bond where “the familiar Anglo-Saxon solidarity” manifested once
again vis-à-vis France, only this time in the digital sphere. Jeanneney even
paraphrased Churchill’s comment to Charles de Gaulle, noting that Oxford’s
alliance with Google Books yet again evidenced how British institutions,
“without consulting anyone on the other side of the English Channel,” favored
US-UK alliances over UK-Continental alliances “in search of European
patriotism for the adventure under way.”9

How can we understand Jeanneney’s framing of Google Books as an Anglo-Saxon
project and the function of this framing in his plea for a nation-based
digitization program? As historian Emile Chabal suggests, the concept of the
Anglo-Saxon mentality is a preeminently French construct that has a clear and
rich rhetorical function to strengthen the French self-understanding vis-à-vis
a stereotypical “other.”10 While fuzzy in its conceptual infrastructure, the
French rhetoric of the Anglo-Saxon is nevertheless “instinctively understood
by the vast majority of the French population” to denote “not simply a
socioeconomic vision loosely inspired by market liberalism and
multiculturalism” but also (and sometimes primarily) “an image of
individualism, enterprise, and atomization.”11 All these dimensions were at
play in Jeanneney’s anti-Google Books rhetoric. Indeed, Jeanneney suggested,
Google’s mass digitization project was not only Anglo-Saxon in its collecting
practices and organizational principles, but also in its regulatory framework:
“We know how Anglo-Saxon law competes with Latin law in international
jurisdictions and in those of new nations. I don’t want to see Anglo-Saxon law
unduly favored by Google as a result of the hierarchy that will be
spontaneously established on its lists.”12

What did Jeanneney suggest as infrastructural protection against the network
power of the Anglo-Saxon mass digitization project? According to Jeanneney,
the answer lay in territorial digitization programs: rather than simply
accepting the colonizing forces of the Anglo-Saxon matrix, Jeanneney argued, a
national digitization effort was needed. Such a national digitization project
would be a “ _contre-attaque_ ” against Google Books that should protect three
dimensions of French cultural sovereignty: its language, the role of the state
in cultural policy, and the cultural/intellectual order of knowledge in the
cultural collections.13 Thus Jeanneney suggested that any Anglo-Saxon mass digitization project should be both countered and complemented by mass digitization projects from other nations and cultures to ensure that cultural works are embedded in meaningful cultural contexts and languages. While the nation was the central base of mass digitization programs, Jeanneney noted, such digitization programs necessarily needed to be embedded in a European, or Continental, infrastructure. Thus, while Jeanneney’s rallying cry to protect the French cultural memory was voiced from France, he gave it a European signature, frequently addressing and including the rest of Europe as a natural ally in his _contre-attaque_ against Google Books.14 Jeanneney’s extension of French concerns to a European level was characteristic of France, which had historically played a leadership role in formulating and shaping the EU.15
The EU, Jeanneney argued, could provide a resilient supranational
infrastructure that would enable French diversity to exist within the EU while
also providing a protective shield against unhampered Anglo-Saxon
globalization.

Other French officials took on a less combative tone, insisting that the
French digitization project should be seen not merely as a reaction to Google
but rather in the context of existing French and European efforts to make
information available online. “I really stress that it’s not anti-American,”
stated one official at the Ministry of Culture and Communication. Rather than
framing the French national initiatives as a reaction to Google Books, the
official instead noted that the prime objective was to “make more material
relevant to European patrimony available,” noting also that the national
digitization efforts were neither unique nor exclusionary—not even to
Google.16 The disjunction between Jeanneney’s discursive claims to mass digitization sovereignty and the anonymous bureaucrat’s pragmatic, networked approach indicates the late-sovereign landscape of mass digitization as it unfolded between identity politics and pragmatic politics, between discursive claims to sovereignty and global economic cooperation. And as the next section shows, the intertwinement of these
discursive, ideological, and economic infrastructures produced a memory
politics in Europeana that was neither sovereign nor post-sovereign, but
rather late-sovereign.

## The Infrastructural Reality of Late-Sovereignty

Politically speaking, Europeana was always more than just an empty
countergesture or emulating response to Google. Rather, as soon as the EU
adopted Europeana as a prestige project, Europeana became embedded in the
political project of Europeanization and began to produce a political logic of
its own. Latching on to (rather than countering) a sovereign logic, Europeana
strategically deployed the European imaginary as a symbolic demarcation of its
territory. But the construction and distribution of Europeana’s territorial imaginaries nevertheless took place by means of globalized networked infrastructures. The circumscribed cultural imaginary of Europeana
was thus made interoperable with the networked logic of globalization. This
combination of a European imaginary and neoliberal infrastructure in Europeana
produced an uneasy balance between national and supranational infrastructural
imaginaries on the one hand and globalized infrastructures on the other.

If France saw Europeana primarily through the prism of sovereign competition,
the European Commission emphasized a different dispositive: economic
competition. In his 2005 response to Jacques Chirac, José Manuel Barroso
acknowledged that the digitization of European cultural heritage was an
important task not only for nation-states but also for the EU as a whole.
Instead of the defiant tone of Jeanneney and De Vabres, Barroso and the EU
institutions opted for a more neutral, pragmatic, and diplomatic mass
digitization discourse. Instead of focusing on Europeana as a lever to prop up
the cultural sovereignty of France, and by extension Europe, in the face of
Americanization, Barroso framed Europeana as an important economic element in
the construction of a knowledge economy.17

Europeana was thus still a competitive project, but it was now reframed as one
that would be much more easily aligned with, and integrated into, a global
market economy.18 One might see the difference in the French and the EU
responses as a question of infrastructural form and affordance. If French mass
digitization discourses were concerned with circumscribing the French cultural
heritage within the territory of the nation, the EC was in practice more
attuned to the networked aspects of the global economy and an accompanying
discourse of competition and potentiality. The infrastructural shift from
delineated sphere to globalized network changed the infrapolitics of cultural
memory from traditional nation-based issues such as identity politics
(including the formation of canons) to more globally aligned trade-related
themes such as copyright and public-private governance.

The shift from canon to copyright did not mean, however, that national
concerns dissipated. On the contrary, in 2008 ministers from the European Union’s member countries called for an investigation into the way Google Books handled copyright.19 In reality, Google Books had very little to do with
Europe at that time, in the sense that Google Books was governed by US
copyright law. Yet the global reach of Google Books made it a European concern
nevertheless. Both German and French representatives emphasized the rift
between copyright legislation in the US and in EU member states. The German
government proposed that the EC examine whether Google Books conformed to
Europe’s copyright laws. In France, President Nicolas Sarkozy stated in more
flamboyant terms that he would not permit France to be “stripped of our
heritage to the benefit of a big company, no matter how friendly, big, or
American it is.”20 Both countries moreover submitted _amicus curiae_ briefs21
to judge Denny Chin (who was in charge of the ongoing Google Books settlement
lawsuit in the US22), in which they argued against the inclusion of foreign
authors in the lawsuit.23 They further brought separate suits against Google Books for its scanning activities and sought to exercise diplomatic pressure
against the advancement of Google Books.24

On an EU level, however, the territorial concerns were sidestepped in favor of
another matrix of concern: the question of public-private governance. Thus,
despite pressure from some member states, the EC decided not to write a
similar “amicus brief” on behalf of the EU.25 Instead, EC Commissioners
McCreevy and Reding emphasized the need for more infrastructures connecting
the public and private sectors in the field of mass digitization.26 Such public-private partnerships (PPPs) could range from relatively conservative forms of cooperation (e.g., private
sponsoring, or payments from the private sector for links provided by
Europeana) to more far-reaching involvement, such as turning the management of
Europeana over to the private sector.27 In a similar vein, a report authored
by a high-level reflection group (Comité des Sages) set up by the European
Commission opened the door for public-private partnerships and also set a time
frame for commercial exploitation.28 It was even suggested that Google could
play a role in the construction of Europeana. These considerations thus contrasted both with the French resistance against Google and with previous statements made by the EC, which had been concerned with preserving the public sector in the administration of Europeana.

Did the European Commission’s networked politics signal a post-sovereign
future for Europeana? This chapter suggests no: despite the EC’s strategies,
it would be wrong to label the infrapolitics of Europeana as post-sovereign.
Rather, Europeana draws up a _late-sovereign_ 29 mass digitization landscape,
where claims to national sovereignty exist alongside networked
infrastructures.30 Why not post-sovereign? Because, as legal scholar Neil
Walker noted in 2003,31 the logic of sovereignty never waned even in the face
of globalized capitalism and legal pluralism. Instead, it fused with these
more globalized infrastructures to produce a form of politics that displayed
considerable continuity with the old sovereign order, yet also had distinctive
features such as globalized trade networks and constitutional pluralisms. In
this new system, seemingly traditional claims to sovereignty are carried out
irrespective of political practices, showing that globally networked
infrastructures and sovereign imaginaries are not necessarily mutually
exclusive; rather, territory and nation continue to remain powerful emotive
forces. Since Neil Walker’s theoretical corrective to theories on post-
sovereignty, the notion of late sovereignty seems to have only gained in
relevance as nationalist imaginaries increase in strength and power through
increasingly globalized networks.

As the following section shows, Europeana is a product of political processes
that are concerned with both the construction of bounded spheres and canons
_and_ networked infrastructures of connectivity, competition, and potentiality
operating beyond, below, and between national societal structures. Europeana’s
late-sovereign framework produces an infrapolitics in which the discursive
political juxtaposition between Europeana and Google Books exists alongside
increased cooperation between Google Books and Europeana, making it necessary
to qualify the comparative distinctions in mass digitization projects on a
much more detailed level than merely territorial delineations, without,
however, disposing of the notion of sovereignty. The simultaneous
contestations and connections between Europeana and Google Books thus make
visible the complex economic, intellectual, and technological infrastructures
at play in mass digitization.

What form did these infrastructures take? In a sense, the complex
infrastructural set-up of Europeana as it played out in the EU’s framework
ended up extending along two different axes: a vertical axis of national and
supranational sovereignty, where the tectonic territorial plates of nation-
states and continents move relative to each other by converging, diverging,
and transforming; and a horizontal axis of deterritorializing flows that
stream within, between, and throughout sovereign territories, consisting both of capital interests (in the form of transnational lobby organizations working to protect, promote, and advance the interests of multinational companies or nongovernmental organizations) and of the affective relations of users.

## Harmonizing Europe: From Canon to Copyright

Even if the EU is less concerned with upholding the regulatory boundaries of
the nation-state in mass digitization, bordering effects are still found in
mass digitized collections—this time in the form of copyright regulation. As
in the case of Google Books, mass digitization also raised questions in Europe
about the future role of copyright in the digital sphere. On the one hand,
cultural industries were concerned about the implications of mass digitization
for their production and copyrights32; on the other hand, educational
institutions and digital industries were interested in “unlocking” the
cognitive and cultural potentials that resided within the copyrighted
collections in cultural heritage institutions. Indeed, copyright was such a
crucial concern that the EC repeatedly stated the necessity of reforming and harmonizing European copyright regulation across borders.

Why is copyright a concern for Europeana? Alongside economic challenges, the
current copyright legislation is _the_ greatest obstacle to mass digitization. Copyright effectively prohibits mass digitization of any kind of
material that is still within copyright, creating large gaps in digitized
collections that are often referred to as “the twentieth-century black hole.”
These black holes appear as a result of the way European “copyright interacts
with the digitization of cultural heritage collections” and manifest
themselves as “marked lack of online availability of twentieth-century
collections.” 33 The lack of a common copyright mechanism not only hinders
online availability, but also challenges European cross-border digitization
projects as well as the possibilities for data-mining collections à la Google, because of the difficulties of ascertaining what falls within the relevant public domain and hence of definitively flagging the public domain status of an object.34

While Europeana’s twentieth-century black hole poses a problem, Europe would
not, as one worker in the EC’s Directorate-General (DG) Copyright unit noted,
follow Google’s opt-out mass digitization strategy because “the European
solution is not the Google solution. We do a diligent search for the rights
holder before digitizing the material. We follow the law.”35 By positioning
herself as on the right side of the law, the DG employee implicitly also
placed Google on the wrong side of the law. Yet, as another DG employee
explained with frustration, the right side of the law was looking increasingly
untenable in an age of mass digitization. Indeed, as she noted, the demands for diligent search were making her work nearly impossible, not least due to the
different legal regimes in the US and the EU:

> Today if one wants to digitize a work, one has to go and ask the rights
holder individually. The problem is often that you can’t find the rights
holder. And sometimes it takes so much time. So there is a rights holder, you
know that he would agree, but it takes so much time to go and find out. And
not all countries have collective management … you have to go company by
company. In Europe we have producing companies that disappear after the film
has been made, because they are created only to make that film. So who are you
going to ask? While in the States the situation is different. You have the
majors, they have the rights, you know who to ask because they are very
stable. But in Europe we have this situation, which makes it very difficult,
the cultural access to cultural heritage. Of course we dream of changing
this.36

The dream is far from realized, however. Since the EU has no direct
legislative competence in the area of copyright, Europeana is the center of a
natural tension between three diverging, but sometimes overlapping instances:
the exclusivity of national intellectual property laws, the economic interest in a common market, and the cultural interest in the free movement of
information and knowledge production—a tension that is further amplified by
the coexistence of different legal traditions across member states.37 Seeking
to resolve this tension, the European Parliament and certain units in the
European Commission have strategically used Europeana as a rhetorical lever to
increase harmonization of copyright legislation and thus make it easier for
institutions to make their collections available online.38 “Harmonization” has
thus become a key concept in the rights regime of mass digitization,
essentially signaling interoperability rather than standardization of national
copyright regimes. Yet stakeholders differ in their opinions concerning who
should hold what rights over what content, over what period of time, at what
price, and how things should be made available. Within the process of harmonization, then, lies a process that is less than harmonious: bringing stakeholders to the table and getting them to commit. As the EC interviewee confirms, harmonization requires not only technical but also political cooperation.

The question of harmonization illustrates the infrapolitical dimensions of
Europeana’s copyright systems, showing that they are not just technical
standards or “direct mirrors of reality” but also “co-produced responses to
technoscientific and political uncertainty.”39 The European attempts to
harmonize copyright standards across national borders therefore pit not only
one technical standard against the other, but also “alternative political
cultures and their systems of public reasoning against one another”40
(Jasanoff, 133). Harmonization thus compresses, rather than eliminates,
national varieties within Europe.41 Hence, Barroso’s vision of Europeana as a
collective _European_ cultural memory is faced with the fragmented patterns of
national copyright regimes, producing if not overtly political borders in the
collections, then certainly infrapolitical manifestations of the cultural
barriers that still exist between European countries.

## The Infrapolitics of Interoperability

Copyright is not the only infrastructural regime that upholds borders in
Europeana’s collections; technical standards also pose great challenges for
the dream of a European connective cultural memory.42 The notion of
_interoperability_ 43 has therefore become a key concern for mass
digitization, as interoperability is what allows digitized cultural memory
institutions to exchange and share documents, queries, and services.44

The rise of interoperability as a key concept in mass digitization is a side-
effect of the increasing complexity of economic, political, and technological
networks. In the twentieth century, most European cultural memory institutions
existed primarily as small “sovereign” institutions, closed spheres governed
by internal logics and with little impetus to open up their internal machinery
to other institutions and cooperate. The early 2000s signaled a shift in the
institutional infrastructural layout of cultural memory institutions, however.
One early significant articulation of this shift was a 324-page European
Commission report entitled _Technological Landscapes for Tomorrow’s Cultural
Economy: Unlocking the Value of Cultural Heritage_ (or the DigiCULT study), a
“roadmap” that outlined the political, organizational, and technological
challenges faced by European museums, libraries, and archives in the period
2002–2006. A central passage noted that the “conditions for success of the
cultural and memory institutions in the Information Society is (sic) the
‘network logic,’ a logic that is of course directly related to the necessity
of being interoperable.” 45 The network logic and resulting demand for
interoperability was not merely a question of digital connections, the report
suggested, but a more pervasive logic of contemporary society. The report thus
conceived interoperability as a question that ran deeper than technological logic.46 The more complex cultural memory infrastructures become, the more
interoperability is needed if one wants the infrastructures to connect and
communicate with each other.47 As information scholar Christine Borgman notes,
interoperability has therefore long been “the holy grail of digital
libraries”—a statement echoed by Commissioner Reding on Europeana in 2005 when
she stated that “I am not suggesting that the Commission creates a single
library. I envisage a network of many digital libraries—in different
institutions, across Europe.”48 Reding’s statement shows that even at the
height of the French exceptionalist discourse on European mass digitization,
other political forces worked instead to reformat the sovereign sphere into a
network. The unravelling of the bounded spheres of cultural memory
institutions into networked infrastructures is therefore both an effect of,
and the further mobilization of, increased interoperability.

Interoperability is not only a concern for mass digitization projects,
however; rather, the calls for interoperability take place on a much more
fundamental level. A European Council Conclusion on Europeana identifies
interoperability as a key challenge for the future construction of Europeana,
but also embeds this concern within the overarching European interoperability
strategy, _European Interoperability Framework for pan-European eGovernment
services_. 49 Today, then, interoperability appears to be turning into a
social theory. The extension of the concept of interoperability into the
social sphere naturally follows the socialization of another technical term:
infrastructure. In the past decades, Susan Leigh Star, Geoffrey Bowker, and
others have successfully managed to frame infrastructure “not only in terms of
human versus technological components but in terms of a set of interrelated
social, organizational, and technical components or systems (whether the data
will be shared, systems interoperable, standards proprietary, or maintenance
and redesign factored in).”50 It follows, then, as Christine Borgman notes,
that even if interoperability in technical terms is a “feature of products and
services that allows the connection of people, data, and diverse systems,”51
policy practice, standards and business models, and vested interests are often greater determinants of interoperability than is technology.52 In similar
terms, information science scholar Jerome McDonough notes that “we need to
cease viewing [interoperability] purely as a technical problem, and
acknowledge that it is the result of the interplay of technical and social
factors.”53 Pushing the concept of interoperability even further, legal scholars Urs Gasser and John Palfrey have argued for viewing the world through a theory of interoperability, naming their project “interop theory,”54 while Internet governance scholar Laura DeNardis proposes a political theory of interoperability.55

More than denoting a technical fact, then, interoperability emerges today as
an infrastructural logic, one that promotes openness, modularity, and
connectivity. Within the field of mass digitization, the notion of
interoperability is in particular promoted by the infrastructural workers of
cultural memory (e.g., archivists, librarians, software developers, digital
humanists, etc.) who dream of opening up the silos they work on to enrich them
with new meanings.56 As noted in chapter 1, European cultural memory
institutions had begun to address unconnected institutions as closed “silos.”
Mass digitization offered a way of thinking of these institutions anew—not as
frigid closed containers, but rather as vital connective infrastructures.
Interoperability thus gives rise to a new infrastructural form of cultural
memory: the traditional delineated sovereign spheres of expertise of analog
cultural memory institutions are pried open and reformatted as networked
ecosystems that consist not only of the traditional national public providers,
but also of additional components that have hitherto been alien in the
cultural memory industry, such as private individual users and commercial
industries.57

The logic of interoperability is also born of a specific kind of
infrapolitics: the politics of modular openness. Interoperability is motivated
by the “open” data movements that seek to break down proprietary and
disciplinary boundaries and create new cultural memory infrastructures and
ways of working with their collections. Such visions are often fueled by
Lawrence Lessig’s conviction that “the most important thing that the Internet
has given us is a platform upon which experience is interoperable.”58 And they
have given rise to the plethora of cultural concepts we find on the Internet
in the age of digital capitalism, such as “prosumers”, “produsers”, and so on.
These concepts are becoming more and more pervasive in the digital environment
where “any format of sound can be mixed with any format of video, and then
supplemented with any format of text or images.”59 According to Lessig, the challenge to this “open” vision comes from those “who don’t play in this interoperability game,” and the contestation between the “open” and the “closed” takes place in “the network,” which produces “a world where anyone can clip and combine just about anything to make something new.”60

Despite its centrality in the mass digitization rhetoric, the concept of
interoperability and the politics it produces is rarely discussed in critical
terms. Yet, as Gasser and Palfrey readily conceded in 2007, interoperability
is not necessarily in itself an “unalloyed good.” Indeed, in “certain
instances,” Palfrey and Gasser noted, interoperability brings with it possible
drawbacks such as increased homogeneity, lack of security, and lack of reliability.61 Today, ten years on, Urs Gasser’s and John Palfrey’s admissions
of the drawbacks of interoperability appear too modest, and it becomes clear
that while their theoretical apparatus was able to identify the centrality of
interoperability in a digital world, their social theory missed its larger
political implications.

When scanning the literature and recommendations on interoperability, certain
words emerge again and again: innovation, choice, diversity, efficiency,
seamlessness, flexibility, and access. As Tara McPherson notes in her related
analysis of the politics of modularity, it is not much of a stretch to “layer
these traits over the core tenets of post-Fordism” and note their effect on
society: “time-space compression, transformability, customization, a
public/private blur, etc.”62 The result, she suggests, is a remaking of the
Fordist standardization processes into a “neoliberal rule of modularity.”
Extending McPherson’s critique into the temporal terrain, Franco Bifo Berardi
emphasizes the semantic politics of speed that is also inherent in
connectivity and interoperability: “Connection implies smooth surfaces with no
margins of ambiguity … connections are optimized in terms of speed and have the potential to accelerate with technological developments.”63 The
connectivity enabled by interoperability thus implies modularity with
components necessarily “open to interfacing and interoperability.”
Interoperability, then, is not only a question of openness, but also a way of
harnessing network effects by means of speed and resilience.

While interoperability may be an inherent infrastructural tenet of neoliberal
systems, increased interoperability does not automatically make mass
digitization projects neoliberal. Yet, interoperability does allow for
increased connectivity between individual cultural memory objects and a
neoliberal economy. And while the neoliberal economy may emulate critical
discourses on freedom and creativity, its main concern is profit. The same
systems that allow users to create and navigate collections more freely are
made interoperable with neoliberal systems of control.64

## The “Work” in Networking

What are the effects of interoperability for the user? The culture of
connectivity and interoperability has not only allowed Europeana’s collections to become more visible to a wider public, but has also enabled these publics to
become intentionally or unintentionally involved in the act of describing and
ordering these same collections, for instance by inviting users to influence
existing collections as well as to generate their own collections. The
increased interaction with works also transforms them from stable to mobile
objects.65 Mass digitization has thus transformed curatorial practice,
expanding it beyond the closed spheres of cultural memory institutions into
much broader ecosystems and extending the focus of curatorial attention from
fixed objects to dynamic network systems. As a result, “curatorial work has
become more widely distributed between multiple agents including technological
networks and software.”66 Once central to curatorial practice, the curator is now only one part of this entire system, and increasingly not the central one. Sharing the curator’s place are users, algorithms, software engineers, and a multitude of other factors.

At the same time, the information deluge generated by digitization has
enhanced the necessity of curation, both within and outside institutions. Once
considered professional caretaking for collections, the curatorial concept
has now been modulated to encompass a whole host of activities and agents,
just as curatorial practices are now ever more engaged in epistemic meaning
making, selecting and organizing materials in an interpretive framework
through the aggregation of global connection.67 And as the already monumental
and ever accelerating digital collections exceed human curatorial capacity,
the computing power of machines and the cognitive capabilities of ordinary citizens are increasingly needed to penetrate and make meaning of the data
accumulations.

What role is Europeana’s user given in this new environment? With the
increased modulation of public-private boundaries, which allows different modules to take on different tasks at different levels, the strict
separation between institution and environment is blurring in Europeana. So is
the separation between user, curator, consumer, and producer. New characters have thus arisen in the wake of these transformations, among them the two figures of the “amateur” and the “citizen scientist.”

In contrast to much of the microlabor that takes place in the digital sphere,
Europeana’s participatory structures often consist of cognitive tasks that are
directly related to the field of cultural memory. This aligns with the
aspirations of the Citizen Science Alliance, which requires that all their
crowdsourcing projects answer “a real scientific research question” and “must
never waste the ‘clicks,’ or time, of volunteers.”68 Citizen science is an
emergent form of research practice in which citizens participate in research
projects on different levels and in different constellations with established
research communities. The participatory structures of citizen science range
from highly complex processes to more simple tasks, such as identifying
colors, themes, patterns that challenge machinic analyses, and so on. There
are different ways of classifying these participatory structures, but the most
prevalent participatory structures in Europeana include:

1. Contribution, where visitors are solicited to provide limited and specified objects, actions, or ideas to an institutionally controlled process, for example, Europeana’s _1914–1918_ exhibition, which allowed (and still allows) users to contribute photos, letters, and other memorabilia from that period.
2. Correction and transcription, where users correct faulty OCR scans of books, newspapers, etc.
3. Contextualization, that is, the practice of placing or studying objects in a meaningful context.
4. Augmenting collections, that is, enriching collections with additional dimensions. One example is the recently launched Europeana Sound Connections, which encourages and enables visitors to “actively enrich geo-pinned sounds from two data providers with supplementary media from various sources. This includes using freely reusable content from Europeana, Flickr, Wikimedia Commons, or even individuals’ own collections.”69
5. And finally, Europeana also offers participation through classification, that is, a social tagging system in which users contribute with classifications.

All these participatory structures fall within the general rubric of
crowdsourcing, and they are often framed in social terms and held up as an
altruistic alternative to the capitalist exploitation of other crowdsourcing
projects, because, as new media theorist Mia Ridge argues, “unlike commercial
crowdsourcing, participation in cultural memory crowdsourcing is driven by
pleasure, not profit. Rather than monetary recompense, GLAM (Galleries, Libraries, Archives, and Museums) projects provide an opportunity for
altruistic acts, activated by intrinsic motivations, applied to inherently
engaging tasks, encouraged by a personal interest in the subject or task.”70
In addition—and based on this notion of altruism—these forms of crowdsourcing are also framed as subversive successors of, or correctives to, consumerism.

The idea of pitting the activities of citizen science against simpler consumer logics has been at the heart of Europeana since its inception,
particularly influenced by the French philosopher Bernard Stiegler, who has
been instrumental not only in thinking about, but also building, Europeana’s
software infrastructures around the character of the “amateur.” Stiegler’s
thesis was that the amateur could subvert the industrial ethos of production
because he/she is driven not by a desire to consume so much as by a desire to love, and thus is able to imbue the archive with a logic different from pure
production71 without withdrawing from participation (the word “amateur” comes
from the French word _aimer_ ).72 Yet it appears to me that the convergence of
cultural memory ecosystems leaves little room for the philosophical idea of
mobilizing amateurism as a form of resistance against capitalist logics.73 The
blurring of production boundaries in the new cultural memory ecosystems raises urgent questions for cultural memory institutions about how they can protect the ethos of the amateur in citizen archives,74 while also aligning them with
institutional strategies of harvesting the “cognitive surplus” of users75 in
environments where play is increasingly taking on aspects of labor and vice
versa. As cultural theorist Angela Mitropoulos has noted, “networking is also
net-working.”76 Thus, while many of the participatory structures we find in
Europeana are participatory projects proper and not just what we might call
participation-lite—or minimal participation77—models, the new interoperable
infrastructures of cultural memory ecosystems make it increasingly difficult
to uphold clear-cut distinctions between civic practice and exploitation in
crowdsourcing projects.

## Collecting Europe

If Europeana is a late-sovereign mass digitization project that maintains
discursive ties to the national imaginary at the same time that it undercuts
this imaginary by means of networked infrastructures through increased
interoperability, the final question is: what does this late-sovereign
assemblage produce in cultural terms? As outlined above, it was an aspiration
of Europeana to produce and distribute European cultural memory by means of
mass digitization. Today, its collection gathers more than 50 million cultural
works in differing formats—from sound bites to photographs, textiles, films,
files, and books. As the previous sections show, however, the processes of
gathering the cultural artifacts have generated considerable friction, producing a
political reality that in some respects reproduces and accentuates the
existing politics of cultural memory institutions in terms of representation
and ownership, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

The story of how Europeana’s initial collection was published and later
revised offers a good opportunity to examine its late-sovereign political
dynamics. Europeana launched in 2008, giving access to some 4.5 million
digital objects from more than 1,000 institutions. Shortly after its launch,
however, the site crashed for several hours. The reason given by EU officials
was that Europeana was a victim of its own success: “On the first day of its
launch, Europe’s digital library Europeana was overwhelmed by the interest
shown by millions of users in this new project … thousands of users searching
in the very same second for famous cultural works like the _Mona Lisa_ or
books from Kafka, Cervantes, or James Joyce. … The site was down because of
massive interest, which shows the enormous potential of Europeana for bringing
cultural treasures from Europe’s cultural institutions to the wide public.” 78
The truth, however, lay elsewhere. As a Europeana employee explained, the site didn’t buckle under the enormous interest shown in it; rather, it went down because “people were hitting the same things everywhere.” The problem wasn’t so much
the way they were hitting on material, but _what_ they were hitting; the
Europeana employee explained that people’s search terms took the Commission by
surprise, “even hitting things the Commission didn’t want to show. Because
people always search for wrong things. People tend to look at pornographic and
forbidden material such as _Mein Kampf_ , etc.”79 Europeana’s reaction was to shut down and redesign its search interface. Europeana’s crash was thus caused not by user popularity, but by a decision made by the Commission and Europeana staff to rework the technical features of Europeana so that the most popular searches would not be public and to remove potentially politically contentious material such as _Mein Kampf_ and nude works by Peter Paul Rubens and Abraham Bloemaert, among others. Another
Europeana employee explained that the launch of Europeana had been forced
through before its time because of a meeting among the cultural ministers in
Europe, making it possible to display only a prototype. This beta version was
coded to reveal the most popular searches, producing a “carousel” of the same
content because, as the previous quote explains, people would search for the
same things, in particular “porn” and “ _Mein Kampf_ ,” allegedly leading the
US press to call Europeana a collection of fascist and porn material.

On a small scale, Europeana’s early glitch highlighted the challenge of how to
police the incoming digital flows from national cultural heritage institutions
for in-copyright works. With hundreds of different institutions feeding
hundreds of thousands of texts, images, and sounds into the portal, scanning
the content for illegal material was an impossible task for Europeana
employees. Many in-copyright works began flooding the portal. One in-copyright
work that appeared in the portal stood out in particular: Hitler’s _Mein
Kampf_. A common conception has been that _Mein Kampf_ was banned after WWII.
The truth was more complicated and involved a complex copyright case. When
Hitler died, his belongings were given to the state of Bavaria, including his
intellectual property rights to _Mein Kampf_. Since Hitler’s copyright was
transferred as part of the Allies’ de-Nazification program, the Bavarian state
allowed no one to republish the book.80 Therefore, reissues of _Mein Kampf_ only reemerged in 2015, when the copyright expired. The premature digital distribution of _Mein Kampf_ in Europeana was thus, according to copyright legislation, illegal. While the _Mein Kampf_ case was extraordinary, it
flagged a more fundamental problem of how to police and analyze all the
incoming data from individual cultural heritage institutions.

On a more fundamental level, however, _Mein Kampf_ indicated not only a legal,
but also a political, issue for Europeana: how to deal with the expressions
that Europeana’s feedback mechanisms facilitated. Mass digitization promoted a new kind of cultural memory logic, namely that of feedback. Feedback mechanisms are
central to data-driven companies like Google because they offer us traces of
the inner worlds of people that would otherwise never appear in empirical
terms, but that can be catered to in commercial terms. 81 Yet, while the
traces might interest the corporation (or sociologist) on the hunt for
people’s hidden thoughts, a prestige project such as Europeana found it
untenable. What Europeana wanted was to present Europe’s cultural memory; what
they ended up showing was Europeans’ intense fascination with fascism and
porn. And this was problematic because Europeana was a political project of
representation, not a commercial project of capture.82

Since its glitchy launch, Europeana has refined its interface techniques, is becoming more attuned to network analytics, and has grown exponentially in terms of both institutional and material scope. There are, at the time of this writing, more than 50 million items in Europeana, and while its numbers are smaller than those of Google Books, its scope is much larger, including images,
texts, sounds, videos, and 3-D objects. The platform features carefully
curated exhibitions highlighting European themes, from generalized exhibitions
about World War I and European artworks to much more specialized exhibitions
on, for instance, European cake culture.

But how is Europe represented in statistical terms? Since Europeana’s
inception, there have been huge variances in how much each nation-state
contributes to Europeana.83 So while Europeana is in principle representing
Europe’s collective cultural memory, in reality it represents a highly fragmented image of Europe, with many European countries not even appearing in the databases. Moreover, even these numbers are potentially misleading, as
one information scholar formerly working with Europeana notes: to pump up
their statistical representation, many institutions strategically invented counting systems that would make their representation seem bigger than it really was, for example by declaring each scanned page of a medieval manuscript a separate object instead of counting the entire manuscript as one work.84 The strategic acts of
volume increase are interesting mass digitization phenomena for many reasons:
first, they reveal the ultimately volume-based approach of mass digitization.
According to the scholar, this volume-based approach finds political support in the EC system, for whom “the object will always be quantitative” since
volume is “the only thing the commission can measure in terms of funding and
result.”85 In a way then, the statistics tell more than one story: in
political terms, they recount not only the classic tale of a fragmented Europe
but also how Europe is increasingly perceived, represented, and managed by
calculative technologies. In technical terms, they reveal the gray areas of
how to delineate and calculate data: what makes a data object? And in cultural
policy terms, they reflect the highly divergent prioritization of mass
digitization in European countries.

The final question is, then: how is this fragmented European collection
distributed? This is the point where Europeana’s territorial matrix reveals
its ultimately networked infrastructure. Europeana may be entered through
Google, Facebook, Twitter, and Pinterest, and vice versa. Thus a click on the aforementioned cake exhibition, for example, takes one straight to Google Arts and Culture. The transition from the Europeana platform to Google happens smoothly, without any friction or notice, and if one didn’t look at the change in URL, one would hardly notice the change at all, since the interface appears almost identical. Yet, what are the implications of this
networked nature? An obvious consequence is that Europeana is structurally
dependent on the social media and search engine companies. According to one
Europeana report, Google is the biggest source of traffic to the Europeana
portal, accounting for more than 50 percent of visits. Any changes in Google’s
algorithm and ranking index therefore significantly impact traffic patterns on
the Europeana portal, which in turn affects the number of Europeana pages
indexed by Google, which then directly impacts on the number of overall visits
to the Europeana portal.86 The same holds true for Facebook, Pinterest,
Google+, etc.

Taken together, the feedback mechanisms, the statistical variance, and the
networked infrastructures of Europeana show just how difficult it is to
collect Europe in the digital sphere. This is not to say that territorial
sentiments don’t have power, however—far from it. Within the digital sphere we
are already seeing territorial statements circulated in Europe on both
national and supranational scales, with potentially far-reaching implications for both. Yet there is little to suggest that the territorial sentiments will
reproduce sovereign spheres in practice. To the extent that reterritorializing
sentiments are circulated in globalizing networks, this chapter has sought to
counter both ideas about post-sovereignty and pure nationalization, viewing
mass digitization instead through the lens of late-sovereignty. As this
chapter shows, the notion of late-sovereignty allows us to conceptualize mass
digitization programs, such as Europeana, as globalized phenomena couched
within the language of (supra)national sovereignty. In an age when rampant nationalist movements sweep through globalized communication networks, this approach feels all the more urgent and applicable not only to mass digitization programs, but also to reterritorializing communication phenomena more broadly. Only if we take into account the ways in which the nationalist imaginary works within the infrastructural reality of late capitalism can we begin to account for the infrapolitics of the highly mediated new territorial imaginaries.

## Notes

1. Lefler 2007; Henry W., “Europe’s Digital Library versus Google,” Café Babel, September 22, 2008; Chrisafis 2008. 2. While
digitization did not stand apart from the political and economic developments
in the rapidly globalizing world, digital theorists and activists soon seized on the Internet as a metaphor for this integrative development, a sign of the inevitability of an ultimately borderless world, where, as
Negroponte notes, time zones would “probably play a bigger role in our digital
future than trade zones” (Negroponte 1995, 228). 3. Goldsmith and Wu 2006. 4.
Rogers 2012. 5. Anderson 1991. 6. “Jacques Chirac donne l’impulsion à la création d’une bibliothèque numérique,” _Le Monde_, March 16, 2005. 7. Meunier 2007. 8. As Sophie Meunier reminds us,
the _Ursprung_ of the competing universalisms can be located in the two
contemporary revolutions that lent legitimacy to the universalist claims of
both the United States and France. In the wake of the revolutions, a perceived
competition arose between these two universalisms, resulting in French
intellectuals crafting anti-American arguments, not least when French
imperialism “was on the wane and American imperialism on the rise.” See
Meunier 2007, 141. Indeed, Meunier suggests, anti-Americanism is “as much a
statement about France as it is about America—a resentful longing for a power
that France no longer has” (ibid.). 9. Jeanneney 2007, 3. 10. Emile Chabal
thus notes how the term is “employed by prominent politicians, serious
academics, political commentators, and in everyday conversation” to “cover a
wide range of stereotypes, pre-conceptions, and judgments about the Anglo-
American world” (Chabal 2013, 24). 11. Chabal 2013, 24–25. 12. Jeanneney 2007.
13. While Jeanneney framed this French cultural-political endeavor as a
European “contre-attaque” against Google Books, he also emphasized that his
polemic was not at all to be read as a form of aggression. In particular he
pointed to the difficulties of translating the word _défie_ , which featured
in the title of the piece: “Someone rightly pointed out that the English word
‘defy,’ with which American reporters immediately rendered _défie,_ connotes a
kind of violence or aggressiveness that isn’t implied by the French word. The
right word in English is ‘challenge,’ which has a different implication, more
sporting, more positive, more rewarding for both sides” (Jeanneney 2007, 85).
14. See pages 12, 22, and 24 for a few examples in Jeanneney 2007. 15. On the
issue of the common currency, see, for instance, Martin and Ross 2004. The
idea of France as an appropriate spokesperson for Europe was familiar already
in the eighteenth century when Voltaire declared French “la Langue de
l’Europe”; see Bivort 2013. 16. The official thus first noted that, “Everybody
is working on digitization projects … cooperation between Google and the
European project could therefore well occur,” and later added that “The worst
scenario we could achieve would be that we had two big digital libraries that
don’t communicate. … The idea is not to do the same thing, so maybe we could
cooperate, I don’t know. Frankly, I’m not sure they would be interested in
digitizing our patrimony. The idea is to bring something that is
complementary, to bring diversity. But this doesn’t mean that Google is an
enemy of diversity.” See Labi 2005. 17. Letter from Manuel Barroso to Jacques
Chirac, July 7, 2005,
[http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
18. As one EC communication noted, a digitization project on the scale of
Europeana could sharpen Europe’s competitive edge in digitization processes
compared to those in the US as well as India and China; see European Commission,
“i2010: Digital Libraries,” _COM(2005) 465 final_ , September 30, 2005, [eur-
lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN](http
://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN).
19. “Google Books raises concerns in some member states,” as an anonymous
Czech diplomatic source put it; see Paul Meller, “EU to Investigate Google
Books’ Copyright Policies,” _PCWorld_, May 28, 2009.
20. Pfanner 2011; Doward 2009; Samuel 2009. 21. An amicus brief takes its name from the Latin _amicus curiae_, “friend of the court.” Frequently, a person or group who is not a party to a lawsuit, but has a strong interest in the matter, will
petition the court for permission to submit a brief in the action with the
intent of influencing the court’s decision. 22. See chapter 4 in this volume.
23. de la Durantaye 2011. 24. Kevin J. O’Brien and Eric Pfanner, “Europe
Divided on Google Book Deal,” _New York Times_, August 23, 2009; see also Courant 2009; Darnton 2009. 25. de la Durantaye 2011. 26. Viviane Reding
and Charlie McCreevy, “It Is Time for Europe to Turn over a New E-Leaf on
Digital Books and Copyright,” MEMO/09/376, September 7, 2009, [europa.eu/rapid
/press-release_MEMO-09-376_en.htm?locale=en](http://europa.eu/rapid/press-
release_MEMO-09-376_en.htm?locale=en). 27. European Commission,
“Europeana—Next Steps,” COM(2009) 440 final, August 28, 2009, [eur-
lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF](http
://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF).
28. “It is logical that the private partner seeks a period of preferential use
or commercial exploitation of the digitized assets in order to avoid free-
rider behaviour of competitors. This period should allow the private partner
to recoup its investment, but at the same time be limited in time in order to
avoid creating a one-market player situation. For these reasons, the Comité
set the maximum time of preferential use of material digitised in public-
private partnerships at maximum 7 years” (Niggemann 2011). 29. Walker 2003.
30. Within this complex environment it is not even possible to draw boundaries between the networked politics of the EU and the sovereign politics of member states. Instead, member states engage in double-talk. As political scientist Sophie Meunier reminds us, even member states such as France engage in double-talk on globalization, with France on the one hand becoming the “worldwide champion of anti-globalization,” and on the other hand “a country whose economy and society have quietly adapted to this much-criticized globalization” (Meunier 2003). On political two-level games, see also Putnam 1988.

31. Walker 2003.

32. “Google Books Project to Remove European Titles,” _Telegraph_, September 7, 2009, remove-European-titles.html>.

33. “Europeana Factsheet,” Europeana, September 28, 2015, /copy-of-europeana-policy-illustrating-the-20th-century-black-hole-in-the-europeana-dataset.pdf>.

34. C. Handke, L. Guibault, and J. J. Vallbé, “Is Europe Falling Behind in Data Mining? Copyright’s Impact on Data Mining in Academic Research,” 2015, id-12015-15-handke-elpub2015-paper-23>.

35. Interview with employee, DG Copyright, EC Commission, 2010.

36. Interview with employee, DG Information and Society, EC Commission, 2010.

37. Montagnani and Borghi 2008.

38. Julia Fallon and Paul Keller, “European Parliament Demands Copyright Rules that Allow Cultural Heritage Institutions to Share Collections Online,” Europeana Pro, rules-better-fit-for-a-digital-age>.

39. Jasanoff 2013, 133.

40. Ibid.

41. Tate 2001.
42. It would be tempting to suggest that the discussion on harmonization above would apply to interoperability as well. But while the concepts of harmonization and interoperability—along with the neighboring term standardization—are used interchangeably and appear similar at first glance, they nevertheless have precise cultural-legal meanings and implicate different infrastructural set-ups. As noted above, the notion of harmonization is increasingly used in the legal context of harmonizing regulatory apparatuses—in the case of mass digitization especially copyright laws. But the word has a richer semantic meaning, suggesting a search for commonalities, literally by means of fitting together or arranging units into a whole. As such the notion of harmony suggests something that is both pleasing and presupposes a cohesive unit(y), for example, a door hinged to a frame, an arm hinged to a body. While used in similar terms, the notion of interoperability expresses a very different infrastructural modality. If harmonization suggests unity, interoperability rather alludes to modularity. For more on the concepts of standardization and harmonization in regulatory contexts, see Tay and Parker 1990.

43. The notion of interoperability is often used to express a system’s ability to transfer, render, and connect to useful information across systems, and calls for interoperability have increased as systems have become increasingly complex.

44. There are “myriad technical and engineering issues associated with connecting together networks, databases, and other computer-based systems”; digitized cultural memory institutions have the option of providing “a greater array of services” than traditional libraries and archives, ranging from sophisticated search engines to document reformatting and rights negotiations; digitized cultural memory materials are often more varied than the material held in traditional libraries; and finally and most importantly, mass digitization institutions are increasingly becoming platforms that connect “a large number of loosely connected components” because no “single corporation, professional organization, or government” would be able to provide all that is necessary for a project such as Europeana, not least on an international scale. EU-NSF Digital Library Working Group on Interoperability between Digital Libraries Position Paper, 1998.

45. _The Digicult Report: Technological Landscapes for Tomorrow’s Cultural Economy: Unlocking the Value of Cultural Heritage: Executive Summary_ (Luxembourg: Office for Official Publications of the European Communities, 2002), 80.
46. “… interoperability in organisational terms is not foremost dependent on technologies,” ibid.

47. As such they align with what Internet governance scholar Laura DeNardis calls the Internet’s “underlying principle” (see DeNardis 2014).

48. The results of the EC Working Group on Digital Library Interoperability are reported in the briefing paper by Stefan Gradmann entitled “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries” (Gradmann 2009).

49. “Semantic interoperability ensures that programmes can exchange information, combine it with other information resources and subsequently process it in a meaningful manner: _European Interoperability Framework for pan-European eGovernment services_, 2004. In the case of Europeana, this could consist of the development of tools and technologies to improve the automatic ingestion and interpretation of the metadata provided by cultural institutions, for example, by mapping the names of artists so that an artist known under several names is recognised as the same person.” (“Council Conclusions on the Role of Europeana for the Digital Access, Visibility and Use of European Cultural Heritage,” European Council Conclusion, June 1, 2016.)

50. Bowker, Baker, Millerand, and Ribes 2010.

51. Tsilas 2011, 103.

52. Borgman 2015, 46.

53. McDonough 2009.

54. Palfrey and Gasser 2012.

55. DeNardis 2011.
56. The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary; Palfrey and Gasser 2012; Matthew Kirschenbaum, “Distant Mirrors and the Lamp,” talk at the 2013 MLA Presidential Forum Avenues of Access session on “Digital Humanities and the Future of Scholarly Communication.”

57. Ping-Huang 2016.

58. Lessig 2005.

59. Ibid.

60. Ibid.

61. Palfrey and Gasser 2012.

62. McPherson 2012, 29.

63. Berardi, Genosko, and Thoburn 2011, 29–31.

64. For more on the nexus of freedom and control, see Chun 2006.

65. The mere act of digitization of course inflicts mobility on an object, as digital objects are kept in a constant state of migration.

66. Krysa 2006.

67. See only the wealth of literature currently generated on the “curatorial turn,” for example, O’Neill and Wilson 2010; and O’Neill and Andreasen 2011.

68. Romeo and Blaser 2011.

69. Europeana Sound Connections, collections-on-a-social-networking-platform.html>.

70. Ridge 2013.

71. Carolyn Dinshaw has argued for the amateur’s ability in similar terms, focusing on her potential to queer the archive (see Dinshaw 2012).

72. Stiegler 2003; Stiegler n.d. The idea of the amateur as a subversive character precedes digitization, of course. Think only of Roland Barthes’s idea of the amateur as a truly subversive character that could lead to a break with existing ideologies in disciplinary societies; see, for instance, Barthes’s celebration of the amateur as a truly anti-bourgeois character (Barthes 1977 and Barthes 1981).
73. Not least in light of recent writings on the experience of even love itself as a form of labor (see Weigel 2016). The constellation of love as a form of labor has a long history (see Lewis 1987).

74. Raddick et al. 2009; Proctor 2013.

75. “Many companies and institutions, that are successful online, are good at supporting and harnessing people’s cognitive surplus. … Users get the opportunity to contribute something useful and valuable while having fun” (Sanderhoff, 33 and 36).

76. Mitropoulos 2012, 165.

77. Carpentier 2011.

78. EC Commission, “Europeana Website Overwhelmed on Its First Day by Interest of Millions of Users,” MEMO/08/733, November 21, 2008. See also Stephen Castle, “Europeana Goes Online and Is Then Overwhelmed,” _New York Times_, November 21, 2008, [nytimes.com/2008/11/22/technology/Internet/22digital.html](http://nytimes.com/2008/11/22/technology/Internet/22digital.html).
79. Information scholar affiliated with Europeana, interviewed by Nanna Bonde Thylstrup, Brussels, Belgium, 2011.

80. See, for instance, Martina Powell, “Bayern will mit ‘Mein Kampf’ nichts mehr zu tun haben” [Bavaria wants nothing more to do with ‘Mein Kampf’], _Die Zeit_, December 13, 2013, soll-erscheinen>. Bavaria’s restrictive publishing policy regarding _Mein Kampf_ should most likely be interpreted as a case of preventive precaution on behalf of the Bavarian State’s diplomatic reputation. Yet the transfer of Hitler’s author’s rights to the Bavarian Ministry allocated _Mein Kampf_ to an existence in a gray area between private and public law. Since then, the book has been the center of attention in a rift between, on the one hand, the Ministry of Finance, which has rigorously defended its position as the formal rights holder, and, on the other hand, historians and intellectuals who, supported by the Bavarian science minister Wolfgang Heubisch, have argued that an academic annotated version of _Mein Kampf_ should be made publicly accessible in the name of Enlightenment.

81. Latour 2007.

82. Europeana’s more traditional curatorial approach to mass digitization was criticized not only by the media, but also by others involved in mass digitization projects, who claimed that Europeana had fundamentally misunderstood the point of mass digitization. One engineer working on mass digitization projects at the influential cultural software developer organization IRI argued that Europeana’s production pattern was comparable to “launching satellites” without thinking of the messages that are returned by the satellites. Google, he argued, was differently attuned to the importance of feedback, because “feedback is their business.”

83. In the most recent published report, Germany contributes about 15 percent and France around 16 percent of the total amount of available works. At the same time, Belgium and Slovenia each count for only around 1 percent, and Denmark, along with Greece, Luxembourg, Portugal, and a slew of other countries, does not even achieve representation in the pie chart; see “Europeana Content Report,” August 6, 2015, /europeana-dsi-ms7-content-report-august.pdf>.

84. Europeana information scholar interview, 2011.

85. Ibid.

86. Wiebe de Jager, “MS15: Annual traffic report and analysis,” Europeana, May 31, 2014.

# 4\. The Licit and Illicit Nature of Mass Digitization

## Introduction: Lurking in the Shadows

A friend has just recommended an academic book to you, and now you are dying to read it. But you know that it is both expensive and hard to get hold of. You head down to your library to request the book, but you soon realize that the wait list is enormous and that you will not be able to get your hands on it for a couple of weeks. Desperate, you turn to your friend for help. She asks, “Why don’t you just go to a pirate library?” and provides you with a link. A new world opens up. Twenty minutes later you have downloaded 30 books that you feel are indispensable to your bookshelf. You didn’t pay a thing. You know what you did was illegal. Yet you also feel strangely justified in your actions, not least spurred on by the enthusiastic words on the shadow library’s front page, which sets forth a comforting moral compass.
You begin thinking to yourself: “Why are pirate libraries deemed more illegal
than Google’s controversial scanning project?” and “What are the moral
implications of my actions vis-à-vis the colonial framework that currently
dictates Europeana’s copyright policies?”

The existence of what this book terms shadow libraries raises difficult questions, not only for your own moral compass but also for the field of mass digitization. Political and popular discourses often reduce the complexity of
these questions to “right” and “wrong” and Hollywood narratives of pirates and
avengers. Yet, this chapter wishes to explore the deeper infrapolitical
implications of shadow libraries, setting out the argument that shadow
libraries offer us a productive framework for examining the highly complex
legal landscape of mass digitization. Rather than writing a chapter that
either supports or counters shadow libraries, the chapter seeks to chart the
complexity of the phenomenon and tease out its relevance for mass digitization
by framing it within what we might call an infrapolitics of parasitism.

In _The Parasite_ , a strange and fabulating book that brings together
information theory and cybernetics, physics, philosophy, economy, biology,
politics, and folk tales, French philosopher Michel Serres constructs an
argument about the conceptual figure of the parasite to explore the parasitic
nature of social relations. In a dizzying array of images and thought-
constructs, Serres argues against the idea of a balanced exchange of energy,
suggesting instead that our world is characterized by one parasite stealing
energy by feeding on another organism. For this purpose he reminds us that in French the term parasite has three distinct but related meanings. The first relates to one
organism feeding off another and giving nothing in return. Second, it refers
to the social concept of the freeloader, who lives off society without giving
anything in return. Both of these meanings are fairly familiar to most, and
lay the groundwork for our annoyance with both bugs and spongers. The third
meaning, however, is less known in most languages except French: here the
parasite is static noise or interference in a channel, interrupting the
seemingly balanced flow of things, mediating and thus transforming relations.
Indeed, for Serres, the parasite is itself a disruptive relation (rather than
entity). The parasite can also change positions of sender, receiver, and
noise, making it exceedingly difficult to discern parasite from nonparasite;
indeed, to such an extent that Serres himself exclaims “I no longer really
know how to say it: the parasite parasites the parasites.”1 Serres thus uses
his parasitic model to make a claim about the nature of cybernetic
technologies and the flow of information, arguing that “cybernetics gets more
and more complicated, makes a chain, then a network. Yet it is founded on the
theft of information, quite a simple thing.”2 The logic of the parasite,
Serres argues, is the logic of the interrupter, the “excluded third” or
“uninvited guest” who intercepts and confuses relations in a process of theft
that has both a value of destruction and a value of construction. The parasite
is thus a generative force, inventing, affecting, and transforming relations.
Hence, parasitism refers not only to an act of interference but also to an
interruption that “invents something new.”3

Michel Serres’s then-radical philosophy of the parasite is today echoed by a
broader recognition of the parasite as not only a dangerous entity, but also a
necessary mediator. Indeed, as Jeanette Samyn notes, we are today witnessing a
“pro-parasitic” movement in science in which “scientists have begun to
consider parasites and other pathogens not simply as problems but as integral
components of ecosystems.”4 In this new view, “… the parasite takes from its
host without ever taking its place; it creates new room, feeding off excess,
sometimes killing, but often strengthening its milieu.” In the following
sections, the lens of the parasite will help us explore the murky waters of
shadow libraries, not (only) as entities, but also as relational phenomena.
The point is to show how shadow libraries belong to the same infrapolitical
ecosystem as Google Books and Europeana, sometimes threatening them, but often
also strengthening them. Moreover, it seeks to show how visitors’ interactions
with shadow libraries are also marked by parasitical relations with Google,
which often mediates literature searches, thus entangling Google and shadow
libraries in a parasitical relationship where one feeds off the other and vice
versa.

Despite these entangled relations, the mass digitization strategies of shadow
libraries, Europeana, and Google Books differ significantly. Basically, we
might say that Google Books and Europeana each represent different strategies
for making material available on an industrial scale while maintaining claims
to legality. The sprawling and rapidly growing group of mass digitization
projects interchangeably termed shadow libraries represents a third set of
strategies. Shadow libraries5 share affinities with Europeana and Google Books
in the sense that they offer many of the same services: instant access to a
wealth of cultural works spanning journal articles, monographs, and textbooks
among others. Yet, while Google Books and Europeana promote visibility to
increase traffic, embed themselves in formal systems of communication, and
operate within the legal frameworks of public funding and private contracting,
shadow libraries in contrast operate in the shadows of formal visibility and
regulatory systems. Hence, while formal mass digitization projects such as
Google Books and Europeana publicly proclaim their desire to digitize the
world’s cultural memory, another layer of people, scattered across the globe
and belonging to very diverse environments, harbor the same aspirations, but
in much more subtle terms. Most of these people express an interest in the
written word, a moral conviction of free access, and a political view on
existing copyright regulations as unjust and/or untimely. Some also express
their fascination with the new wonders of technology and their new
infrastructural possibilities. Others merely wish to practice forms of access
that their finances, political regime, or geography otherwise prohibit them
from doing. And all of them are important nodes in a new shadowy
infrastructural system that provides free access worldwide to books and
articles on a scale that collectively far surpasses both Google and Europeana.

Because of their illicit nature, most analyses of shadow libraries have
centered on their legal transgressions. Yet, their cultural trajectories
contain nuances that far exceed legal binaries. Approaching shadow libraries
through the lens of infrapolitics is helpful for bringing forth these much
more complex cultural mass digitization systems. This chapter explores three
examples of shadow libraries, focusing in particular on their stories of
origin, their cultural economies, and their sociotechnical infrastructures.
Not all shadow libraries fit perfectly into the category of mass digitization.
Some of them are smaller in size, more selective, and less industrial.
Nevertheless, I include them because their open access strategies allow for
unlimited downloads. Thus, shadow libraries, while perhaps selective in size
themselves, offer the opportunity to reproduce works at a massive and
distributed scale. As such, they are the perfect example of a mass
digitization assemblage.

The first case centers on lib.ru, an early Russia-based file-sharing platform
for exchanging books that today has grown into a massive and distributed file-
sharing project. It is primarily run by individuals, but it has also received
public funding, which shows that what at first glance appears as a simple case
of piracy simultaneously serves as a much more complex infrapolitical
structure. The second case, Monoskop, distinguishes itself by its boutique
approach to digitization. Monoskop too is characterized by its territorial
trajectory, rooted in Bratislava’s digital scene as an attempt to establish an
intellectual platform for the study of avant-garde (digital) cultures that
could connect its Bratislava-based creators to a global scene. Finally, the
chapter looks at UbuWeb, a shadow library dedicated to avant-garde cultural
works ranging from text and audio to images and film. Founded in 1996 as a US-
based noncommercial file-sharing site by poet Kenneth Goldsmith in response to
the marginal distribution of crucial avant-garde material, UbuWeb today offers
a wealth of avant-garde sound art, video, and textual works.

As the case studies show, shadow libraries have become significant mass
digitization infrastructures that offer the user free access to academic
articles and books, often by means of illegal file-sharing. They are informal
and unstable networks that rely on active user participation across a wide
spectrum, from deeply embedded people who have established file-sharing sites
to the everyday user occasionally sending the odd book or article to a friend
or colleague. As Lars Eckstein notes, most shadow libraries are characterized
not only by their informal character, but also by the speed with which they
operate, providing “a velocity of media content” which challenges legal
attacks and other forms of countermeasures.6 Moreover, shadow libraries also
often operate in a much more widely distributed fashion than both Europeana
and Google, distributing and mirroring content across multiple servers, and
distributing labor and responsibility in a system that is on the one hand more
robust, more redundant, and more resistant to any single point of failure or
control, and on the other hand more ephemeral, without a central point of
back-up. Indeed, some forms of shadow libraries exist entirely without a
center, instead operating infrastructurally along communication channels in
social media; for example, the use of the Twitter hashtag #ICanHazPDF to help
pirate scientific papers.

Today, shadow libraries exist as timely reminders of the infrapolitical nature
of mass digitization. They appear as hypertrophied versions of the access
provided by Google Books and Europeana. More fundamentally, they also exist as
political symptoms of the ideologies of the digital, characterized by ideals
of velocity and connectivity. As such, we might say that although shadow
libraries often position themselves as subversives, in many ways they also
belong to the same storyline as other mass digitization projects such as
Google Books and Europeana. Significantly, then, shadow libraries are
infrapolitical in two senses: first, they have become central infrastructural
elements in what James C. Scott calls the “infrapolitics of subordinate
groups,” providing everyday resistance by creating entrance points to
hitherto-excluded knowledge zones.7 Second, they represent and produce the
infrapolitics of the digital _tout court_ with their ideals of real-time,
globalized, and unhindered access.

## Lib.ru

Lib.ru is one of the earliest known digital shadow libraries. It was
established by the Russian computer science professor Maxim Moshkov, who
complemented his academic practice of programming with a personal hobby of
file-sharing on the so-called RuNet, the Russian-language segment of the
Internet.8 Moshkov’s collection had begun as an e-book swapping practice in
1990, but in 1994 he uploaded the material to his institute’s web server where
he then divided the site into several sections such as “my hobbies,” “my work,”
and “my library.”9 If lib.ru began as a private project, however, the role of
Moshkov’s library soon changed as it quickly became Russia’s preferred shadow
library, with users playing an active role in its expansion by constantly
adding new digitized books. Users would continually scan and submit new texts,
while Moshkov, in his own words, worked as a “receptionist” receiving and
handling the material.10

Shadow libraries such as Moshkov’s were most likely born not only out of a
love of books, but also out of frustration with Russia’s lack of access to up-
to-date and affordable Western works.11 As they continued to grow and gain in
popularity, shadow libraries thus became not only points of access, but also
signs of infrastructural failure in the formal library system.12 After lib.ru
outgrew its initial server storage at Moshkov’s institute, Moshkov divided it
into smaller segments that were then distributed, leaving only the Russian
literary classics on the original site.13 Neighboring sites hosted other
genres, ranging from user-generated texts and fan fiction on a shadow site
called [samizdat.lib.ru](http://samizdat.lib.ru) to academic books in a shadow
library titled Kolkhoz, named after the commons-based agricultural cooperative
of the early Soviet era and curated and managed by “amateur librarians.”14 The
steadily accumulating numbers of added works, digital distributors, and online
access points expanded not only the range of the shadow collections, but also
their networked affordances. Lib.ru and its offshoots thus grew into an
influential node in the global mass digitization landscape, attracting both
political and legal attention.

### Lib.ru and the Law

Until 2004, lib.ru deployed a practice of handling copyright complaints by
simply removing works at the first request from the authors.15 But in 2004 the
library received its first significant copyright claim from the big Russian
publisher Kirill i Mefody (KM). KM requested that Moshkov remove access to a
long list of books, claiming exclusive Internet rights on the books, along
with works that were considered public domain. Moshkov refused to honor the
request, and a lawsuit ensued. The Ostankino Court of Moscow initially dismissed the lawsuit because the contracts for exclusive Internet rights were
considered invalid. This did not deter KM, however, which then approached the
case from a different perspective, filing applications on behalf of well-known
Russian authors, including the crime author Alexandra Marinina and the science
fiction writer Eduard Gevorkyan. In the end, only Eduard Gevorkyan maintained
his claim, which amounted to the considerable sum of one million rubles.16

During the trial, Moshkov’s library received widespread support from both
technologists and users of lib.ru, expressed, for example, in a manifesto
signed by the International Union of Internet Professionals, which among other
things touched upon the importance of online access not only to cultural works
but also to the Russian language and culture:

> Online libraries are an exceptionally large intellectual fund. They lessen
the effect of so-called “brain drain,” permitting people to stay in the orbit
of Russian language and culture. Without online libraries, the useful effect
of the Internet and computers in Russian education system is sharply lowered.
A huge, openly available mass of Russian literary texts is a foundation
permitting further development of Russian-language culture, worldwide.17

Emphasizing that Moshkov often had an agreement with the authors he put
online, the manifesto also called for a more stable model of online public
libraries, noting that “A wide list of authors who explicitly permitted
placing their works in the lib.ru library speaks volumes about the
practicality of the scheme used by Maxim Moshkov. However, the litigation
underway shows its incompleteness and weak spots.”18 Significantly, Moshkov’s
shadow library also received both moral and financial support from the state,
more specifically in the form of funding of one million rubles granted by the
Federal Agency for the Press and Mass Media. The funding came with the
following statement from the Agency’s chairman, Mikhail Seslavinsky:
“Following the lively discussion on how copyright could be protected in
electronic libraries, we have decided not to wait for a final decision and to
support the central library of RuNet—Maxim Moshkov’s site.”19 Seslavinsky’s
support not only reflected the public’s support of the digital library, but
also his own deep-seated interests as a self-confessed bibliophile, council
chair of the Russian organization National Union of Bibliophiles since 2011,
and author of numerous books on bibliology and bibliophilia. Additionally, the
support also reflected the issues at stake for the Russian legislative
framework on copyright. A revised law “On Copyright and Related Rights” had just passed its second reading in the Russian parliament on April 21, 2004, extending copyright from 50 years after an author’s death to
70 years, in accordance with international law and as a condition of Russia’s
entry into the World Trade Organization.20

The public funding, Moshkov stated, was spent on modernizing the technical
equipment for the shadow library, including upgrading servers and performing
OCR scanning on select texts.21 Yet, despite the widespread support, Moshkov
lost the copyright case to KM on May 31, 2005. The defeat was limited,
however. Indeed, one might even read the verdict as a symbolic victory for
Moshkov, as the court fined him only 30,000 rubles, a fraction of what KM
had originally sued for. The verdict did have significant consequences for how
Moshkov manages lib.ru, however. After the trial, Moshkov began extending his
classical literature section and stopped uploading books sent by readers into
his collection, unless they were from authors who submitted them because they
wished to publish in digital form.

What can we glean from the story of lib.ru about the infrapolitics of mass
digitization? First, the story of lib.ru illustrates the complex and
contingent historical trajectory of shadow libraries. Second, as the next
section shows, it offers us the possibility of approaching shadow libraries
from an infrastructural perspective, and exploring the infrapolitical
dimensions of shadow libraries in the area of tension between resistance and
standardization.

### The Infrapolitics of Lib.ru: Infrastructures of Culture and Dissent

While global in reach, lib.ru is first and foremost a profoundly
territorialized project. It was born out of a set of political, economic, and
aesthetic conditions specific to Russia and carries the characteristics of its
cultural trajectory. First, the private governance of lib.ru, initially
embodied by Moshkov, echoes the general development of the Internet in Russia
from 1991 to 1998, which was constructed mainly by private economic and
cultural initiatives at a time when the state was in a period of heavy
transition. Lib.ru’s minimalist programming style also made it a cultural
symbol of the early RuNet, acting as a marker of cultural identity for Russian
Internet users at home and abroad.22

The infrapolitics of lib.ru also carries the traits of the media politics of
Russia, which has historically been split into two: a political and visible
level of access to cultural works (through propaganda), and an infrapolitical
invisible level of contestation and resistance, enabling Russian media
consumers to act independently from official institutionalized media channels.
Indeed, some scholars tie the practice of shadow libraries to the Soviet
Union’s analog shadow activities, which are often termed _samizdat_ , that is,
illegal cultural distribution, including illegally listening to Western radio,
illegally trafficking Western music, and illegally watching Western films.23
Despite often circulating Western pop culture, the late-Soviet era samizdat
practices were often framed as noncapitalist practices of dissent without
profit motives.24 The dissent, however, was not necessarily explicitly
expressed. Lacking the defining fervor of a clear political ideology, and
offering no initiatives to overthrow the Soviet regime, samizdat was rather a
mode of dissent that evaded centralized ideological control. Indeed, as
Aleksei Yurchak notes, samizdat practices could even be read as a mode of
“suspending the political,” thus “avoiding the political concerns that had a
binary logic determined by the sovereign state” to demonstrate “to themselves
and to others that there were subjects, collectivities, forms of life, and
physical and symbolic spaces in the Soviet context that, without being overtly
oppositional or even political, exceeded that state’s abilities to define,
control, and understand them.”25 Yurchak thus reminds us that even though
samizdat was practiced as a form of nonpolitical practice, it nevertheless
inherently had significant political implications.

The infrapolitics of samizdat not only referred to a specific social practice
but were also, as Ann Komaromi reminds us, a particular discourse network
rooted in the technology of the typewriter: “Because so many people had their
own typewriters, the production of samizdat was more individual and typically
less linked to ideology and organized political structures. … The circulation
of Samizdat was more rhizomatic and spontaneous than the underground
press—samizdat was like mushroom ‘spores.’”26 The technopolitical
infrastructure of samizdat changed, however, with the fall of the Berlin Wall
in 1989, the further decentralization of the Russian media landscape, and the
emergence of digitization. Now, new nodes emerged in the Russian information
landscape, and there was no centralized authority to regulate them. Moreover,
the transmission of the Western capitalist system gave rise to new types of
shadow activity that produced items instead of just sharing items, adding a
new consumerist dimension to shadow libraries. Indeed, as Kuznetsov notes, the
late-Soviet samizdat created a dynamic textual space that aligned with more
general tendencies in mass digitization where users were “both readers and
librarians, in contrast to a traditional library with its order, selection,
and strict catalogisation.”27

If many of the new shadow libraries that emerged in the 1990s and 2000s were
inspired by the infrapolitics of samizdat, then, they also became embedded in
an infrastructural apparatus that was deeply nested within a market economy.
Indeed, new digital libraries emerged under such names as Aldebaran,
Fictionbook, Litportal, Bookz.ru, and Fanzin, which developed new platforms
for the distribution of electronic books under the label “Liters,” offering
texts to be read free of charge on a computer screen or downloaded at a
cost.28 In both cases, the authors receive a fee, either from the price of the
book or from the site’s advertising income. Accompanying these new commercial
initiatives, a concomitant movement rallied together in the form of Librusek,
a platform hosted on a server in Ecuador that offered its users the
possibility of uploading works on a distributed basis.29 In contrast to
Moshkov’s centralized control, then, the library’s operator Ilya Larin aligned himself with the international piracy movement, calling his site a pirate library and
gracing Librusek’s website with a small animated pirate, complete with sabre
and parrot.

The integration and proliferation of samizdat practices into a complex
capitalist framework produced new global readings of the infrapolitics of
shadow libraries. Rather than reading shadow libraries as examples of late-
socialist infrapolitics, scholars also framed them as capitalist symptoms of
“market failure,” that is, the failure of the market to meet consumer
demands.30 One prominent example of such a reading was the influential Social
Science Research Council report edited by Joe Karaganis in 2006, titled “Media
Piracy in Emerging Economies,” which noted that cultural piracy appears most
notably as “a failure to provide affordable access to media in legal markets”
and concluded that within the context of developing countries “the pirate
market cannot be said to compete with legal sales or generate losses for
industry. At the low end of the socioeconomic ladder where such distribution
gaps are common, piracy often simply is the market.”31

In the Western world, Karaganis’s reading was a progressive response to the
otherwise traditional approach to media piracy as a legal failure, which
argued that tougher laws and increased enforcement are needed to stem
infringing activity. Yet, this book argues that Karaganis’s report, and the
approach it represents, also frames the infrapolitics of shadow libraries
within a consumerist framework that excises the noncommercial infrapolitics of
samizdat from the picture. The increasing integration of Russian media
infrapolitics into Western apparatuses, and the reframing of shadow libraries
from samizdat practices of political dissent to market failure, situates the
infrapolitics of shadow libraries within a consumerist dispositive and positions the individual participants as consumers. As some critical voices suggest, this
has an impact on the political potential of shadow libraries because they—in
contrast to samizdat—actually correspond “perfectly to the industrial
production proper to the legal cultural market production.”32 Yet, as the
final section in this chapter shows, one also risks missing the rich nuances
of infrapolitics by conflating consumerist infrastructures with consumerist
practice.33

The political stakes of shadow libraries such as lib.ru illustrate the
difficulties in labeling shadow libraries in political terms, since they are
driven neither by pure globalized dissent nor by pure globalized and
commodified infrastructures. Rather, they straddle these binaries as
infrapolitical entities, the political dynamics of which align both with
standardization and dissent. Revisiting once more the theoretical debate, the
case of lib.ru shows that shadow libraries may certainly be global phenomena,
yet one should be careful not to disregard the specific cultural-political
trajectories that shape each individual shadow library. Lib.ru demonstrates
how the infrapolitics of shadow libraries emerge as infrastructural
expressions of the convergence between historical sovereign trajectories,
global information infrastructures, and public-private governance structures.
Shadow libraries are not just globalized projects that exist in parallel to
sovereign state structures and global economic flows. Instead, they are
entangled in territorial public-private governance practices that produce
their own late-sovereign infrapolitics, which, paradoxically, are embedded in
larger mass digitization problematics, both on their own territory and on the
global scene.

## Monoskop

In contrast to the broad and distributed infrastructure of lib.ru, other
shadow libraries have emerged as specialized platforms that cater to a
specific community and encourage a specific practice. Monoskop is one such
shadow library. Like lib.ru, Monoskop started as a one-man project and in many
respects still reflects its creator, Dušan Barok, who is an artist, writer,
and cultural activist involved in critical practices in the fields of
software, art, and theory. Prior to Monoskop, his activities were mainly
focused on the Bratislava cultural media scene, and Monoskop was among other
things set up as an infrastructural project, one that would not only offer
content but also function as a form of connectivity that could expand the
networked powers of the practices of which Barok was a part.34 In particular,
Barok was interested in researching the history of media art so that he could
frame the avant-garde media practices in which he engaged in Bratislava within
a wider historical context and thus lend them legitimacy.

### The Shadow Library as a Legal Stratagem

Monoskop was partly motivated by Barok’s own experiences of being barred from
works he deemed of significance to the field in which he was interested. As he
notes, the main impetus to start a blog “came from a friend who had access to
PDFs of books I wanted to read but could not afford to buy as they were not
available in public libraries.”35 Barok thus began to work on Monoskop with a
group of friends in Bratislava, initially hiding it from search engine bots to
create a form of invisibility that obfuscated its existence without, however,
preventing people from finding the Log and uploading new works. Information
about the Log was distributed through mailing lists on Internet culture, as well as through posts on e-book torrent trackers, DC++ networks, extensive
repositories such as LibGen and Aaaaarg, cloud directories, document-sharing
platforms such as Issuu and Scribd, and digital libraries such as the Internet
Archive and Project Gutenberg.36 The shadow library of Monoskop thus slowly
began to emerge, partly through Barok’s own efforts at navigating email lists
and downloading material, and partly through people approaching Monoskop
directly, sending it links to online or scanned material and even offering it
entire e-book libraries. Rather than posting these “donated” libraries in
their entirety, however, Barok and his colleagues edited the received
collection and materials so that they would fit Monoskop’s scope, and they
also kept scanning material themselves.

Today Monoskop hosts thematically curated collections of downloadable books on
art, culture, media studies, and other topics, partly in order to stimulate
“collaborative studies of the arts, media, and humanities.”37 Indeed, Monoskop
operates with a _boutique_ approach, offering relatively small collections of
personally selected publications to a steady following of loyal patrons who
regularly return to the site to explore new works. Its focal points are
summarized by its contents list, which is divided into three main categories:
“Avant-garde, modernism and after,” “Media culture,” and “Media, theory and
the humanities.” Within these three broad focal points, hundreds of links
direct the user to avant-garde magazines, art exhibitions and events, art and
design schools, artistic and cultural themes, and cultural theorists.
Importantly, shadow libraries such as Monoskop do not just host works
unbeknownst to the authors—authors also leak their own works. Thus, some
authors publishing with brand name, for-profit, all-rights-reserving, print-
on-paper-only publishing houses will also circulate a copy of their work on a
free text-sharing network such as Monoskop.38

How might we understand Monoskop’s legal situation and maneuverings in
infrapolitical terms? Shadow libraries such as Monoskop draw their
infrapolitical strength not only from the content they offer but also from
their mode of engagement with the gray zones of new information
infrastructures. Indeed, the infrapolitics of shadow libraries such as
Monoskop can perhaps best be characterized as a stratagematic form of
infrapolitics. Monoskop neither inhabits the passive perspective of the
digital spectator nor deploys a form of tactics that aims to be failure free.
Rather, it exists as a body of informal practices and knowledges, as cunning
and dexterous networks that actively embed themselves in today’s
sociotechnical infrastructures. It operates with high sociotechnical
sensibilities, living off of the social relations that bring it into being and
stabilize it. Most significantly, Monoskop skillfully exploits the cracks in
the infrastructures it inhabits, interchangeably operating, evading, and
accompanying them. As Matthew Fuller and Andrew Goffey point out in their
meditation on stratagems in digital media, they do “not cohere into a system”
but rather operate as “extensive, open-ended listing[s]” that “display a
certain undecidability because inevitably a stratagem does not describe or
prescribe an action that is certain in its outcome.”39 Significantly, then,
failures and errors not only represent negative occurrences in stratagematic
approaches but also appeal to willful dissidents as potentially beneficial
tools. Dušan Barok’s response to a question about the legal challenges against
Monoskop evidences this stratagematic approach, as he replies that shadow
libraries such as Monoskop operate in the “gray zone,” which to him is also
the zone of fair use.40 Barok thus highlights the ways in which Monoskop
engages with established media infrastructures, not only on the level of
discursive conventions but also through their formal logics, technical
protocols, and social proprieties.

Thus, whereas Google lights up gray zones through spectacle and legal power
plays, and Europeana shuns gray zones in favor of the law, Monoskop
embraces its shadowy existence in the gray zones of the law. By working in the
shadows, Monoskop and likeminded operations highlight the ways in which the
objects they circulate (including the digital artifacts, their knowledge
management, and their software) can be manipulated and experimented upon to
produce new forms of power dynamics.41 Their ethics lie more in the ways in
which they operate as shadowy infrastructures than in intellectual reflections
upon the infrastructures they counter, without, however, creating an
opposition between thinking and doing. Indeed, as its history shows, Monoskop
grew out of a desire to create a space for critical reflection. The
infrapolitics of Monoskop is thus an infrapolitics of grayness that marks the
breakdown of clearly defined contrasts between legal and illegal, licit and
illicit, desire and control, instead providing a space for activities that are
ethically ambiguous and in which “everyone is sullied.”42

### Monoskop as a Territorializing Assemblage

While Monoskop’s stratagems play on the infrapolitics of the gray zones of
globalized digital networks, the shadow library also emerges as a late-
sovereign infrastructure. As already noted, Monoskop was from the outset
focused on surfacing and connecting art and media objects and theory from
Central and Eastern Europe. Often, this territorial dimension recedes into the
background, with discussions centering more on the site’s specialized catalog
and legal maneuvers. Yet Monoskop was initially launched partly as a response
to criticisms of the new media scenes in the Slovak and Czech Republics as “incomprehensible avant-garde.”43 It began as a simple invite-only wiki in August 2004, urging participants to collaboratively research the
history of media art. It was from the beginning conceived more as a
collaborative social practice and less as a material collection, and it
targeted noninstitutionalized researchers such as Barok himself.

As the nodes in Monoskop grew, its initial aim to research media art history
also expanded into looking at wider cultural practices. By 2010, it had grown
into a 100-gigabyte collection which was organized as a snowball research
collection, focusing in particular on “the white spots in history of art and
culture in East-Central Europe,” spanning “dozens of CDs, DVDs, publications,
as well as recordings of long interviews [Barok] did”44 with various people he
considered forerunners in the field of media arts. Indeed, Barok at first had
no plans to publish the collection of materials he had gathered over time. But
during his research stay in Rotterdam at the influential Piet Zwart Institute,
he met the digital scholars Aymeric Mansoux and Marcell Mars, who were both
active in avant-garde media practices, and they convinced him to upload the
collection.45 Due to the fragmentary character of his collection, Barok found
that Monoskop corresponded well with the pre-existing wiki, to which he began
connecting and embedding videos, audio clips, image files, and works. An
important motivating factor was the publication of material that was otherwise
unavailable online. In 2009, Barok launched Monoskop Log, together with his
colleague Tomáš Kovács. This site was envisioned as an affiliated online
repository of publications for Monoskop, or, as Barok terms it, “a free access
living archive of writings on art, culture, and media technologies.”46

Seeking to create situated spaces of reflection and to shed light on the
practices of media artists in Eastern and Central Europe, Monoskop thus
launched several projects devoted to excavating media art from a situated
perspective that takes its local history into account. Today, Monoskop remains
a rich source of information about artistic practices in Central and Eastern
Europe, Poland, Hungary, Slovakia, and the Czech Republic, relating it not
only to the art histories of the region, but also to its history of
cybernetics and computing.

Another early motivation for Monoskop was to provide a situated nodal point in
the globalized information infrastructures that emphasized the geographical
trajectories that had given rise to it. As Dušan Barok notes in an interview,
“For a Central European it is mind-boggling to realize that when meeting a
person from a neighboring country, what tends to connect us is not only
talking in English, but also referring to things in the far West. Not that the
West should feel foreign, but it is against intuition that an East-East
geographical proximity does not translate into a cultural one.”47 From this
perspective, Monoskop appears not only as an infrapolitical project of global
knowledge, but also one of situated sovereignty. Yet, even this territorial
focus holds a strategic dimension. As Barok notes, Monoskop’s ambition was not
only to gain new knowledge about media art in the region, but also to cash in
on the cultural capital into which this knowledge could potentially be
converted. Thus, its territorial matrix first and foremost translates into
Foucault’s famous dictum that “knowledge is power.” But it is nevertheless
also testament to the importance of including more complex spatial dynamics in
one’s analytical matrix of shadow libraries, if one wishes to understand them
as more than globalized breakers of code and arbiters of what Manuel Castells
once called the “space of flows.”48

## UbuWeb

If Monoskop is one of the most comprehensive shadow libraries to emerge from
critical-artistic practice, UbuWeb is one of the earliest ones and has served
as an inspirational example for Monoskop. UbuWeb is a website that offers an
encyclopedic scope of downloadable audio, video, and plain-text versions of
avant-garde art recordings, films, and books. Most of the books fall in the
category of small-edition artists’ books and are presented on the site with
permission from the artists in question, who are not so concerned with
potential loss of revenue since most of the works are officially out of print
and never made any money even when they were commercially available. At first
glance, UbuWeb’s aesthetics appear almost demonstratively spare. Still
formatted in HTML, it upholds a certain 1990s net aesthetics that has resisted
the revamps offered by the new century’s more dynamic infrastructures. Yet, a
closer look reveals that UbuWeb offers a wealth of content, ranging from high
art collections to much more rudimentary objects. Moreover, and more
fundamentally, its critical archival practice raises broader infrapolitical
questions of cultural hierarchies, infrastructures, and domination.

### Shadow Libraries between Gift Economies and Marginalized Forms of Distribution

UbuWeb was founded by poet Kenneth Goldsmith in response to the marginal
distribution of crucial avant-garde material. It provides open access both to
out-of-print works that find a second life through digital art reprint and to
the work of contemporary artists. Upon its opening in 2001, Kenneth Goldsmith
termed UbuWeb’s economic infrastructure a “gift economy” and framed it as a
political statement that highlighted certain problems in the distribution of
and access to intellectual materials:

> Essentially a gift economy, poetry is the perfect space to practice utopian
politics. Freed from profit-making constraints or cumbersome fabrication
considerations, information can literally “be free”: on UbuWeb, we give it
away. … Totally independent from institutional support, UbuWeb is free from
academic bureaucracy and its attendant infighting, which often results in
compromised solutions; we have no one to please but ourselves. … UbuWeb posts
much of its content without permission; we rip full-length CDs into sound
files; we scan as many books as we can get our hands on; we post essays as
fast as we can OCR them. And not once have we been issued a cease and desist
order. Instead, we receive glowing emails from artists, publishers, and record
labels finding their work on UbuWeb, thanking us for taking an interest in
what they do; in fact, most times they offer UbuWeb additional materials. We
happily acquiesce and tell them that UbuWeb is an unlimited resource with
unlimited space for them to fill. It is in this way that the site has grown to
encompass hundreds of artists, thousands of files, and several gigabytes of
poetry.49

At the time of its launch, UbuWeb garnered extraordinary attention and divided
communities along lines of access and rights to historical and contemporary
artists’ media. It was in this range of responses to UbuWeb that one could
discern the formations of new infrastructural positions on digital archives,
how they should be made available, and to whom. Yet again, these legal
positions were accompanied by a territorial dynamic, including the impact of
regional differences in cultural policy on UbuWeb. Thus, as artist Jason Simon
notes, there were significant differences between the ways in which European
and North American distributors related to UbuWeb. These differences, Simon
points out, were rooted in “medium-specific questions about infrastructure,”
which differ “from the more interpretive discussion that accompanied video's
wholesale migration into fine art exhibition venues.”50 European pre-recession
public money thus permitted nonprofit distributors to embrace infrastructures
such as UbuWeb, while American distributors were much more hesitant toward
UbuWeb’s free-access model. When recession hit Europe in the late 2000s,
however, the European links to UbuWeb’s infrastructures crumbled while “the
legacy American distributors … have been steadily adapting.”51 The territorial
modulations in UbuWeb’s infrastructural set-up testify not only to how shadow
libraries such as UbuWeb are inherently always linked up to larger political
events in complex ways, but also to the latent ephemerality of the entire project.

Goldsmith has more than once asserted that UbuWeb’s insistence on
“independent” infrastructures also means a volatile existence: “… by the time
you read this, UbuWeb may be gone. Cobbled together, operating on no money and
an all-volunteer staff, UbuWeb has become the unlikely definitive source for
all things avant-garde on the internet. Never meant to be a permanent archive,
Ubu could vanish for any number of reasons: our ISP pulls the plug, our
university support dries up, or we simply grow tired of it.” Goldsmith’s
emphasis on the ephemerality of UbuWeb is a shared condition of most shadow
libraries, most of which exist only as ghostly reminders with nonfunctional
download links or simply as 404 pages, once they pull the plug. Rather than
lamenting this volatile existence, however, Goldsmith embraces it as an
infrapolitical stance. As Cornelia Sollfrank points out, UbuWeb was—and still
is—as much an “archival critical practice that highlights the legal and social
ramifications of its self-created distribution and archiving system as it is
about the content hosted on the site.”52 UbuWeb is thus not so much about
authenticity as it is about archival defiance, appropriation, and self-
reflection. Such broader and deeper understandings of archival theory and
practice allow us to conceive of it as the kind of infrapolitics that,
according to James C. Scott, “provides much of the cultural and structural
underpinning of the more visible political action on which our attention
has generally been focused.”53 The infrapolitics of UbuWeb is devoted to
hatching new forms of organization, creating new enclaves of freedom in the
midst of orthodox ways of life, and inventing new structures of production and
dissemination that reveal not only the content of their material but also
their marginalized infrastructural conditions and the constellation of social
forces that lead to their online circulation.54

The infrapolitics of UbuWeb is testament not only to avant-garde cultures, but
also to what Hito Steyerl in her _In Defense of the Poor Image_ refers to as the
“neoliberal radicalization of the culture as commodity” and the “restructuring
of global media industries.”55 These materials “circulate partly in the void left by state organizations” that find it too difficult to maintain digital distribution infrastructures, and partly in the void left by the art world’s commercial ecosystems, which offer the cultural materials hosted on UbuWeb only a liminal existence. Thus,
while UbuWeb on the one hand “reveals the decline and marginalization of
certain cultural materials” whose production was often “considered a task of
the state,”56 on the other hand it shows how intellectual content is
increasingly privatized, not only in corporate terms but also through
individuals, which in UbuWeb’s case is expressed in Kenneth Goldsmith, who
acts as the sole archival gatekeeper.57

## The Infrapolitics of Shadow Libraries

If the complexity of shadow libraries cannot be reduced to the contrastive
codes of “right” and “wrong” and global-local binaries, the question remains
how to theorize the cultural politics of shadow libraries. This final section
outlines three central infrapolitical aspects of shadow libraries: access,
speed, and gift.

Mass digitization poses two important questions to knowledge infrastructures:
a logistical question of access and a strategic question of to whom to
allocate that access. Copyright poses a significant logistical barrier between
users and works as a point of control in the ideal free flow of information.
In mass digitization projects, the drive is toward increased access to information, whereas in publishing industries built on monopoly rights, the drive is toward restriction and control. The uneasy fit between copyright regulations
and mass digitization projects has, as already shown, given rise to several
conflicts, either as legal battles or as copyright reform initiatives arguing
that current copyright frameworks cast doubt upon the political ideal of total
access. As with Europeana and Google Books, the question of _access_ often
stands at the core of the infrapolitics of shadow libraries. Yet, the
strategic responses to the problem of copyright vary significantly: if
Europeana moves within the established realm of legality to reform copyright
regulations and Google Books produces claims to new cultural-legal categories
such as “nonconsumptive reading,” shadow libraries offer a third
infrastructural maneuver—bypassing copyright infrastructures altogether
through practices of illicit file distribution.

Shadow libraries elicit a range of responses and discourses that place
themselves on a spectrum between condemnation and celebration. The most
straightforward response comes, unsurprisingly, from the publishing industry,
highlighting the fundamentally violent breaches of the legal order that
underpins the media industry. Such responses include legal action, policy
initiatives, and public campaigns against piracy, often staging—in more or
less explicit terms—the “pirate” as a common enemy of mankind, beyond legal
protection and to be fought by whatever means necessary.

The second response comes from the open source movement, represented among
others by the pro-reform copyright movement Creative Commons (CC), whose
flexible copyright framework has been adopted by both Europeana and Google
Books.58 While the open source movement has become a voice on behalf of the
telos of the Internet and its possibilities of offering free and unhindered
access, its response to shadow libraries has revealed the complex
infrapolitics of access as a postcolonial problematic. As Kavita Philip
argues, CC’s founder Lawrence Lessig maintains the image of the “good” Western
creative vis-à-vis the “bad” Asian pirate, citing for instance his statement
in his influential book _Free Culture_ that “All across the world, but
especially in Asia and Eastern Europe, there are businesses that do nothing
but take other people’s copyrighted content, copy it, and sell it. … This is
piracy plain and simple, … This piracy is wrong.”59 Such statements, Philip argues, frame the Asian pirate as external to order, whether it be the order of Western law or neoliberalism.60

The postcolonial critique of CC’s Western normative discourse has instead
sought to conceptualize piracy not as deviant behavior, but rather as an infrastructure endemic to globalized information economies.61 This theoretical development offers valuable insights
for understanding the infrapolitics of shadow libraries. First of all, it
allows us to go beyond moral discussions of shadow libraries, and to pay
attention instead to the ways in which their infrastructures are built, how
they operate, and how they connect to other infrastructures. As Lawrence Liang
points out, if infrastructures traditionally belong to the domain of the
state, often in cooperation with private business, pirate infrastructures
operate in the gray zones of this set-up, in much the same way as slums exist
as shadow cities and copies are regarded as shadows of the original.62
Moreover, and relatedly, it reminds us of the inherently unstable form of
shadow libraries as a cultural construct, and the ways in which what gets
termed piracy differs across cultures. As Brian Larkin notes, piracy is best
seen as emerging from specific domains: dynamic localities with particular
legal, aesthetic, and social assemblages.63 In a final twist, research shows that the usage of shadow libraries is distributed globally. Multiple sources attest to the fact that most Sci-Hub usage occurs
outside the Anglosphere. According to Alexa Internet analytics, the top five
country sources of traffic to Sci-Hub were China, Iran, India, Brazil, and
Japan, which account for 56.4 percent of recent traffic. As of early 2016,
data released by Sci-Hub’s founder Alexandra Elbakyan also shows high usage in
developed countries, with a large proportion of the downloads coming from the
US and countries within the European Union.64 The same tendency is evident in
the #ICanHazPDF Twitter phenomenon, which, while framed as “civil disobedience” to aid users in the Global South,65 nevertheless sees higher numbers of posts from the US and Great Britain.66

This brings us to the second infrapolitical aspect, namely the question of speed and distribution. In their article “Book Piracy as Peer Preservation,” Dennis Tenen and Maxwell Henry Foxman note that rather than condemning book
piracy _tout court_ , established libraries could in fact learn from the
infrastructural set-ups of shadow libraries in relation to participatory
governance, technological innovation, and economic sustainability.67 Shadow
libraries are often premised upon an infrastructure that includes user
participation without, however, operating in an enclosed sphere. Often, shadow
libraries coordinate their actions by use of social media platforms and online
forums, including Twitter, Reddit, and Facebook, and the primary websites used
to host the shared files are AvaxHome, LibGen, and Sci-Hub. Commercial online
cloud storage accounts (such as Dropbox and Google Drive) and email are also
used to share content in informal ways. Users interested in obtaining an
article or book chapter will disseminate their request over one or more of the
platforms mentioned above. Other users of those platforms try to get the
requested content via their library accounts or employer-provided access, and
the actual files being exchanged are often hosted on other websites or emailed
to the requesting users. Through these networks, shadow libraries offer
convenient and speedy access to books and articles. Little empirical evidence is available, but one study does indicate that a large number of shadow library downloads are made because obtaining a PDF from a shadow library is easier than using the legal channels of access offered by universities and their research libraries.68 Other studies indicate, however, that many downloads occur because users lack, or perceive themselves to lack, full-text access to the desired texts.69

Finally, as indicated in the introduction to this chapter, shadow libraries
produce what we might call a cultural politics of parasitism. In the normative
model of shadow libraries, discourse often centers upon piracy as a theft
economy. Other discourses, drawing upon anthropological sources, have pointed
out that peer-to-peer file-sharing sites in reality organize around a gift
economy, that is, “a system of social solidarity based on a structured set of
gift exchange and social relationships among consumers.”70 This chapter,
however, ends with a third proposal: that shadow libraries produce a
parasitical form of infrapolitics. In _The Parasite_, philosopher Michel Serres proposes a way of thinking about relations of transfer—in social,
biological, and informational contexts—as fundamentally parasitic, that is, a
subtractive form of “taking without giving.” Serres contrasts the parasitic
model with established models of society based on notions such as exchange and
gift giving.71 Shadow libraries produce an infrapolitics that denies the
distinction between producers and subtractors of value, allowing us instead to
focus on the social roles infrastructural agents perform. Restoring a sense of
the wider context of parasitism to shadow libraries does not provide a clear-
cut solution as to when and where shadow libraries should be condemned and
when and where they should be tolerated. But it does help us ask questions in
a different way. And it certainly prevents us from regarding shadow libraries as the “other” in the landscape of mass digitization. Shadow libraries
instigate new creative relations, the dynamics of which are infrastructurally
premised upon the medium they use. Just as typewriters were an important
component of samizdat practices in the Soviet Union, digital infrastructures
are central components of shadow libraries, and in many respects shadow
libraries bring to the fore the same cultural-political questions as other
forms of mass digitization: questions of territorial imaginaries,
infrastructures, regulation, speed, and ethics.

## Notes

1. Serres 1982, 55.
2. Serres 1982, 36.
3. Serres 1982, 36.
4. Samyn 2012.
5. I stick with “shadow library,” a term that I first found in Lawrence Liang’s (2012) writings on copyright and have since seen meaningfully unfolded in a variety of contexts. Part of its strength is its sidestepping of the question of the pirate and that term’s colonial connotations.
6. Eckstein and Schwarz 2014.
7. Scott 2009, 185–201.
8. See also Maxim Moshkov’s own website hosted on lib.ru.
9. Carey 2015.
10. Schmidt 2009.
11. Bodó 2016, “Libraries in the post-scarcity era.” As Balazs Bodó notes, the first mass-digitized shadow archives in Russia were run by professors from the hard sciences, but the popularization of computers soon gave rise to a much more varied and widespread shadow library terrain, fueled by “enthusiastic readers, book fans, and often authors, who spared no effort to make their favorite books available on FIDOnet, a popular BBS system in Russia.”
12. Stelmakh 2008, 4.
13. Bodó 2016.
14. Bodó 2016.
15. Vul 2003.
16. “In Defense of Maxim Moshkov’s Library,” n.d., The International Union of Internet Professionals.
17. Ibid.
18. Ibid.
19. Schmidt 2009, 7.
20. Ibid.
21. Carey 2015.
22. Mjør 2009, 84.
23. Bodó 2015.
24. Kiriya 2012.
25. Yurchak 2008, 732.
26. Komaromi, 74.
27. Mjør, 85.
28. Litres.ru.
29. Library Genesis.
30. Kiriya 2012.
31. Karaganis 2011, 65, 426.
32. Kiriya 2012, 458.
33. For a great analysis of the late-Soviet youth’s relationship with consumerist products, read Yurchak’s careful study _Everything Was Forever, Until It Was No More: The Last Soviet Generation_ (2006).
34. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
35. Ibid.
36. Ibid.
37. “Monoskop,” Monoskop, last modified March 28, 2018.
38. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
39. Fuller and Goffey 2012, 21.
40. “Dušan Barok: Interview,” _Neural_ 44 (2010), 11.
41. In an interview, Dušan Barok mentions his inspirations, including early examples such as textz.com, a shadow library created by the Berlin-based artist Sebastian Lütgert. Textz.com was one of the first websites to facilitate free access to books on culture, politics, and media theory in the form of text files. Often the format would itself toy with legal limits. Thus, during a legal debacle with Suhrkamp Verlag, Lütgert declared in a mischievous manner that the website would offer a text in various formats: “Today, we are proud to announce the release of walser.php, a 10,000-line php script that is able to generate the plain ascii version of ‘Death of a Critic.’ The script can be redistributed and modified (and, of course, linked to) under the terms of the GNU General Public License, but may not be run without written permission by Suhrkamp Verlag. Of course, reverse-engineering the writings of senile German revisionists is not the core business of textz.com, so walser.php includes makewalser.php, a utility that can produce an unlimited number of similar (both free as in speech and free as in copy) php scripts for any digital text”; see “Suhrkamp recalls walser.pdf, textz.com releases walser.php,” Rolux.org.
42. Fuller and Goffey 2012, 11.
43. “MONOSKOP Project Finished,” COL-ME Co-located Media Expedition, www.col-me.info/node/841.
44. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
45. Aymeric Mansoux is a senior lecturer at the Piet Zwart Institute whose research deals with the defining, constraining, and confining of cultural freedom in the context of network-based practices. Marcel Mars is an advocate of free software and a researcher who is also active in a shadow library named _Public Library_ (also interchangeably known as Memory of the World).
46. “Dušan Barok,” Memory of the World.
47. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
48. Castells 1996.
49. Kenneth Goldsmith, “UbuWeb Wants to Be Free,” last modified July 18, 2007.
50. Jacob King and Jason Simon, “Before and After UbuWeb: A Conversation about Artists’ Film and Video Distribution,” _Rhizome_, February 20, 2014.
51. King and Simon 2014.
52. Sollfrank 2015.
53. Scott 1990, 184.
54. For this, I am indebted to Hito Steyerl’s essay “In Defense of the Poor Image,” in her book _The Wretched of the Screen_, 31–59.
55. Steyerl 2012, 36.
56. Steyerl 2012, 39.
57. Sollfrank 2015.
58. Other significant open source movements include the Free Software Foundation, the Wikimedia Foundation, and several open access initiatives in science.
59. Lessig 2005, 57.
60. Philip 2005, 212.
61. See, for instance, Larkin 2008; Castells and Cardoso 2012; Fredriksson and Arvanitakis 2014; Burkart 2014; and Eckstein and Schwarz 2014.
62. Liang 2009.
63. Larkin 2008.
64. John Bohannon, “Who’s Downloading Pirated Papers? Everyone,” _Science Magazine_, April 28, 2016.
65. “The Scientists Encouraging Online Piracy with a Secret Codeword,” _BBC Trending_, October 21, 2015.
66. Liu 2013.
67. Tenen and Foxman 2014.
68. See Kramer 2016.
69. Gardner and Gardner 2017.
70. Giesler 2006, 283.
71. Serres 2013, 8.

# III Diagnosing Mass Digitization

# 5 Lost in Mass Digitization

## The Desire and Despair of Large-Scale Collections

In 1995, founding editor of _Wired_ magazine Kevin Kelly mused upon how a
digital library would look:

> Two decades ago nonlibrarians discovered Borges’s Library in silicon
circuits of human manufacture. The poetic can imagine the countless rows of
hexagons and hallways stacked up in the Library corresponding to the
incomprehensible micro labyrinth of crystalline wires and gates stamped into a
silicon computer chip. A computer chip, blessed by the proper incantation of
software, creates Borges’s Library on command. … Pages from the books appear
on the screen one after another without delay. To search Borges’s Library of
all possible books, past, present, and future, one needs only to sit down (the
modern solution) and click the mouse.1

At the time of Kelly’s writing, book digitization on a massive scale had not
yet taken place. Building his chimerical dream around Jorge Luis Borges’s own
famous magic piece of speculation regarding the Library of Babel, Kelly not
only dreamed up a fantasy of what a digital library might be in an imaginary
dialogue with Borges; he also argued that Jorge Luis Borges’s vision had
already taken place, by grace of nonlibrarians, or—more
specifically—programmers. Specifically, Kelly mentions Karl Sims, a computer
scientist working on a supercomputer called Connection Machine 5 (you may
remember it from the set of _Jurassic Park_ ), who had created a simulated
version of Borges’s library.2

Twenty years after Kelly’s vision, a whole host of mass digitization projects
have sought more or less explicitly to fulfill Kelly’s vision. Incidentally,
Brewster Kahle, one of the lead engineers of the aforementioned Connection
Machine, has become a key figure in the field. Kahle has long dreamed of
creating a universal digital library, and has worked to fulfill it in
practical terms through the nonprofit Internet Archive project, which he
founded in 1996 with the stated mission of creating “universal access to all
knowledge.” In an op-ed in 2017, Kahle lamented the recent lack of progress in
mass digitization and argued for the need to create a new vision for mass
digitization, stating, “The Internet Archive, working with library partners,
proposes bringing millions of books online, through purchase or digitization,
starting with the books most widely held and used in libraries and
classrooms.”3 Reminding us that three major entities have “already digitized
modern materials at scale: Google, Amazon, and the Internet Archive, probably
in that order of magnitude,”4 Kahle nevertheless notes that “bringing
universal access to books” has not yet been achieved because of a fractured
field that diverges on questions of money, technology, and legal clarity. Yet,
outlining his new vision for how a sustainable mass digitization project could
be achieved, Kahle remains convinced that mass digitization is both a
necessity and a possibility.

While Brewster Kahle, Kevin Kelly, Google, Amazon, Europeana’s member
institutions, and others disagree on how to achieve mass digitization, for
whom, and in what form, they are all united in their quest for digitization on
a massive scale. Many shadow libraries operate with the same quantitative logic, proudly announcing the size of their massive holdings on their front pages.

Given the fractured field of mass digitization, and the lack of economic
models for how to actually make mass digitization sustainable, why does the
common dream of mass digitization persist? As this chapter shows, the desire
for quantity, which drives mass digitization, is—much like the Borges stories
to which Kelly also refers—laced with ambivalence. On the one hand, the
quantitative aspirations are driven by the basic assumption that “more is more”: more data and more cultural memory equal better industrial and intellectual progress. On the other hand, the sheer scale of ambition also causes frustration, anxiety, and failed plans.

The sense that sheer size and big numbers hold the promise of progress and
greatness is nothing new, of course. And mass digitization brings together
three fields that have each historically grown out of scalar ambitions:
collecting practices, statistics, and industrialization processes.
Historically, as cultural theorist Couze Venn reminds us, most large
collections bear the imprint of processes of (cultural) colonization, human
desires, and dynamics of domination and superiority. We therefore find in
large collections the “impulses and yearnings that have conditioned the
assembling of most of the collections that today establish a monument to past
efforts to gather together knowledge of the world and its treasury of objects
and deeds.”5 The field of statistics, moreover, so vital to the evolution of
modern governance models, is also premised upon the accumulation of ever-more
information.6 And finally, we all recognize the signs of modern
industrialization processes as they appear in the form of globalization,
standardization, and acceleration. Indeed, as French sociologist Henri
Lefebvre once argued (with a nod to Marx), the history of modern society could
plainly and simply be seen as the history of accumulation: of space, of
capital, of property.7

In mass digitization, we hear the political echoes of these histories. From
Jeanneney’s war cry to defend European patrimonies in the face of Google’s
cultural colonization to Google’s megalomaniac numbers game and Europeana’s
territorial maneuverings, scale is used as a point of reference not only to
describe the space of cultural objects in themselves but also to outline a
realm of cultural command.

A central feature in the history of accumulation and scale is the development
of digital technology and the accompanying new modes of information
organization. But even before then, the invention of new technologies offered
not only new modes of producing and gathering information and new
possibilities of organizing information assemblages, but also new questions
about the implications of these leaps in information production. As historians
Ann Blair and Peter Stallybrass show, “infolust,” that is, the cultural
attitude that values expansive collections for long-term storage, emerged in
the early Renaissance period.8 In that period, new print technology gave rise
to a new culture of accumulating and stockpiling notes and papers, even
without having a specific compositional purpose in mind. Within this scholarly
paradigm, new teleologies were formed that emphasized the latent value of any
piece of information, expressed for instance by Joachim Jungius’s exclamation
that “no field was too remote, no author too obscure that it would not yield
some knowledge or other” and Gabriel Naudé’s observation that there is “no
book, however bad or decried, which will not be sought after by someone over
time.”9 The idea that any piece of information was latently valuable was later
remarked upon by Melvil Dewey, who noted at the beginning of the twentieth
century that a “normal librarian’s instinct is to keep every book and
pamphlet. He knows that possibly some day, somebody wants it.”10

Today, mass digitization repeats similar concerns. It reworks the old dream of
an all-encompassing and universal library and has foregrounded once again
questions about what to save and what to let go. What, one might ask, would
belong in such a library? One important field of interest is the question of
whether, and how, to preserve metadata—today’s marginalia. Is it sufficient to
digitize cultural works, or should all accompanying information about the
provenance of the work also be included? And how can we agree upon what
marginalia actually is across different disciplines? Mass digitization
projects in natural history rarely digitize marginalia such as logs and
written accounts, focusing only on what that discipline considers the main object at hand, for example, a piece of rock, a fly specimen, or a pressed plant. Yet,
in the history of science, logs are an invaluable source of information about
how the collected object ended up in the collection, the meaning it had to the
collector, and the place it takes in the collection.11 In this way, new
questions with old trajectories arise: What is important for understanding a
collection and its life? What should be included and excluded? And how will we
know what will turn out to be important in the future?

In the era of big data, the imperative is often to digitize and “save all.”
Prestige mass digitization projects such as Google Books and Europeana have
thus often contextualized their importance in terms of scale. Indeed, as we
saw in the previous chapters, the question of scale has been a central point
of political contestation used to signal infrastructural power. Thus the hype
around Google Books, as well as the political ire it drew, centered on the
scale of the project just as quantitative goals are used in Europeana to
signal progress and significance. Inherent in these quantitative claims are
not only ideas about political power, but also the widespread belief in
digital circles—and the political regimes that take inspiration from them—that
the more information the user is able to access, the more empowered the user
is to navigate and make meaning on their own. In recent years, the imaginaries
of freedom of navigation have also been adjoined by fantasies of freedom of
infrastructural construction through the image of the platform. Mass
digitization projects should therefore offer the user not only the potential to navigate collections freely, but also the ability to build new products and services on top of them.12 Yet, as this chapter argues, the ethos of potentially unlimited
expansion also prompts a new set of infrapolitical questions about agency and
control. While these questions are inherently related to the larger questions
of territory and power explored in the previous chapters, they occur on a
different register, closer to the individual user and within the spatialized
imaginaries of digital information.

As many critics have noted, the logic of expansion and scale, and the
accompanying fantasies of the empowered user, often build on neoliberal
subjectification processes. While highly seductive, they often fail to take
into account the reality of social complexity. Therefore, as Lisa Nakamura
notes, the discourse of complete freedom of navigation through technological
liberation—expressed aptly in Microsoft’s famous slogan “Where do you want to
go today?”—assumes, wrongly, that everyone is at liberty to move about
unhindered.13 And the fantasy of empowerment through platforming is often also
shot through with neoliberal ideals that not only fail to take into account
the complex infrapolitical realities of social interaction, but also rely on
an entrepreneurial epistemology that evokes “a flat, two-dimensional stage on
which resources are laid out for users to do stuff with,” a stage whose users are not “inclined to look underneath or behind it, or to question its structure.”14

This chapter unfolds these central infrapolitical problematics of the spatial
imaginaries of knowledge in relation to a set of prevalent cultural spatial
tropes that have gained new life in digital theory and that have informed the
construction and development of mass digitization projects: the flaneur, the
labyrinth, and the platform. Cultural reports, policy papers, and digital
design strategies often use these three tropes to elicit images of pleasure
and playfulness in mass digitization projects; yet, as the following sections
show, they also raise significant questions of control and agency, not least
against the backdrop of ever-increasing scales of information production.

## Too Much—Never Enough

The question of scale in mass digitization is often posed as a rational quest
for knowledge accumulation and interoperability. Yet this section argues that
digitized collections are more than just rational projects; they strike deep
affective chords of desire, domination, and anxiety. As Couze Venn reminds us,
collections harbor an intimate connection between cognition and affective
economy. In this connection, the rationalized drive to collect is often
accompanied by a slippage, from a rationalized urge to a pathological drive
ultimately associated with desire, power, domination, anxiety, nostalgia,
excess, and—sometimes even—compulsion and repetition.15 The practice of
collecting objects thus not only signals a rational need but often also
springs from desire, and as psychoanalysis has taught us, a sense of lack is
the reflection of desire. As Slavoj Žižek puts it, “desire’s _raison d’être_ is not to realize its goal, to find full satisfaction, but to reproduce itself as desire.”16 Therefore, no matter how much one collects, the collector will rarely experience their collection as complete and will often be haunted by the desire to collect more.

In addition to the frightening (yet titillating) aspect of never having our
desires satisfied, large collections also give rise to a set of information
pathologies that, while different in kind, share an understanding of
information as intimidation. The experience is generally induced by two
inherently linked factors. First, the size of the cultural collection has
historically also often implied a powerful collector with the means to gather
expensive materials from all over the world, and a large collection has thus
had the basic function of impressing and, if need be, intimidating people.
Second, large collections give rise to the sheer subjective experience of
being overwhelmed by information and a mental incapacity to take it all in.
Both factors point to questions of potency and importance. And both work to
instill a fear in the visitor. As Voltaire once noted, “a great library has
the quality of frightening those who look upon it.”17

The intimidating nature of large collections has been a favored trope in
cultural representations. The most famous example of a gargantuan, even
insanity-inducing, library is of course Jorge Luis Borges’s tale of the
Library of Babel, the universal totality of which becomes both a monstrosity
in the characters’ lives and a source of hope, depending on their willingness
to make peace and submit themselves to the library’s infinite scale and
Kafkaesque organization.18 But Borges’s nonfiction piece from 1939, _The Total
Library,_ also serves as an elegant tale of an informational nightmare. _The
Total Library_ begins by noting that the dream of the utopia of the total
library “has certain characteristics that are easily confused with virtues”
and ends with a more somber caution: “One of the habits of the mind is the
invention of horrible imaginings. … I have tried to rescue from oblivion a
subaltern horror: the vast, contradictory Library, whose vertical wildernesses
of books run the incessant risk of changing into others that affirm, deny, and
confuse everything like a delirious god.” 19

Few escape the intimidating nature of large collections. But while attention
has often been given to the citizen subjected to the disciplining force of the
sovereign state in the form of its institutions, less attention has been given
to those who have had to structure and make sense of these intimidating
collections. Until recently, cultural collections were usually oriented toward
the figure of the patron or, in more abstract geographical terms, (God-given)
patrimony. Renaissance cabinets of curiosities were meant to astonish and
dazzle; the ostentatious wealth of the Baroque museums of the seventeenth and
eighteenth centuries staged demonstrations of Godly power; and bourgeois
museums of the nineteenth century positioned themselves as national
institutions of _Bildung_. But while cultural memory institutions have worked
first and foremost to mirror to an external audience the power and the psyche
of their owners in individual, religious, and/or geographical terms, they have
also consistently had to grapple internally with the problem of how to best
organize and display these collections.

One of the key generators of anxiety in vast libraries has been the question
of infrastructure. Each new information paradigm and each new technology has
induced new anxieties about how best to organize information. The fear of
disorder haunted both institutions and individuals. In his illustrious account
of Ephraim Chamber’s _Cyclopaedia_ (the forerunner of Denis Diderot’s and Jean
le Rond d’Alembert’s famous Enlightenment project, the _Encyclopédie_ ),
Richard Yeo thus recounts how Gottfried Leibniz complained in 1680 about “that
horrible mass of books which keeps on growing” so that eventually “the
disorder will become nearly insurmountable.”20 Five years on, the French
scholar and critic Adrien Baillet warned his readers, “We have reason to fear
that the multitude of books which grows every day in a prodigious fashion will
make the following centuries fall into a state as barbarous as that of the
centuries that followed the fall of the Roman Empire.”21 And centuries later,
in the wake of the typewriter, the annual report of the Secretary of the
Smithsonian Institution in Washington, DC, drew attention to the
infrastructural problem of organizing the information it helped make available, noting that “about twenty thousand volumes …
purporting to be additions to the sum of human knowledge, are published
annually; and unless this mass be properly arranged, and the means furnished
by which its contents may be ascertained, literature and science will be
overwhelmed by their own unwieldy bulk.”22 The experience of feeling
overwhelmed by information and lacking the right tools to handle it is no
joke. Indeed, a number of German librarians are documented to have gone insane between 1803 and 1825 in the wake of the information glut that followed the
secularization of ecclesiastical libraries.23 The desire for grand collections
has thus always also been followed by an accompanying anxiety relating to
questions of infrastructure.

As the history of collecting pathologies shows, reducing mass digitization
projects to rational and technical information projects would deprive them of
their rich psychological dimensions. Instead of discounting these pathologies,
we should acknowledge them, and examine not only their nature, but also their
implications for the organization of mass digitization projects. As the
following section shows, the pathologies not only exist as psychological
forces, but also as infrastructural imaginaries that directly impact theories
on how best to organize information in mass digitization. If the scale of mass
digitization projects is potentially limitless, how should they be organized?
And how will we feel when moving about in their gargantuan archives?

## The Ambivalent Flaneur

In an article on cultures of archiving, sociologist Mike Featherstone asked
whether “the expansion of culture available at our fingertips” could be
“subjected to a meaningful ordering,” or whether the very “desire to remedy
fragmentation” should be “seen as clinging to a form of humanism with its
emphasis upon cultivation of the persona and unity which are now regarded as
merely nostalgic.”24 Featherstone raised the question in response to the
popularization of the Internet at the turn of the millennium. Yet, as the
previous section has shown, his question is probably as old as the collecting
practices themselves. Such questions have become no less significant with mass
digitization. How are organizational practices conceived of as meaningful
today? As we shall see, this question not only relates to technical
characteristics but is also informed by a strong spatial imaginary that often
takes the shape of labyrinthine infrastructures and often orients itself
toward the figure of the user. Indeed, the role of the organizer of knowledge,
and therefore the accompanying responsibility of making sense of collections,
has been transferred from knowledge professionals to individuals.

Today, as seen in all the examples of mass digitization we have explored in
the previous chapters, cultural memory institutions face a different paradigm
than that of the eighteenth- and nineteenth-century disciplining cultural
memory institution. In an age that encourages individualism, democratic
ideals, and cultural participation, the orientations of the cultural memory
institutions have shifted in discourse, practice, or both, toward an emphasis
on the importance of the subjective experience and active participation of the
individual visitor. As part of this shift, and as a result of the increasing
integration of the digital imaginary and production apparatus into the field
of cultural memory, the visitor has thus metamorphosed from a disciplinary
subject to a prosumer, produser, participant, and/or user.

The organizational shift in the cultural memory ecosystem means that
visionaries and builders of mass digitization infrastructures now pay
attention not only to how collections may reflect upon the institution that
holds the collection, but also to how the user experiences the informational
navigation of collections. This is not to say that making an impression, or
even disciplining the user, is not a concern for many mass digitization
projects. Mass digitizations’ constant public claims to literal greatness
through numbers evidence this. Yet, today’s projects also have to contend with
the opinion of the public and must make their projects palatable and
consumable rather than elitist and intimidating. The concern of the builders
of mass digitization infrastructure is therefore not only to create an
internal logic to their collections, but also to maximize the user’s
experience of being offered a wealth of information, while mitigating the
danger of giving the visitor a sense of losing themselves, or even drowning, in
information. An important question for builders of mass digitization projects
has therefore been how to build visual and semantic infrastructures that offer
the user a sense of meaningful direction as well as a desire to keep browsing.

While digital collections are in principle no longer tethered to their physical origins, we still encounter ideas about them in spatialized terms, often through notions such as trails, paths, and alleyways used to visualize the spaces of digital collections.25 This spatialized logic did not emerge with the mass digitization of cultural heritage collections, however; it also resides at the heart of some of the most influential early theories of the digital realm.26 These theorized and conceptualized
the web as a new form of architectural infrastructure, not only in material
terms (such as cables and servers) but also as a new experiential space.27 And
in this spatialized logic, the figure of the flaneur became a central
character. Thus, we saw in the 1990s the rise of a digital interpretation of
the flaneur, originally an emblematic figure of modern urban culture at the
turn of the twentieth century, in the form of the virtual flaneur or the
cyberflaneur. In 1994, German net artists Heiko Idensen and Matthias Krohn
paid homage to the urban figure, noting in a text that “the screen winks at
the flaneur” and locating the central tenets of computer culture with the
“intoxication of the flânerie. Screens as streets and homes … of the crowd?”28
Later, artist Steven Goldate provided a simple equation between online and
offline spaces, noting among other things that “What the city and the street
was to the flaneur, the Internet and the Superhighway have become to the
Cyberflaneur.”29

Scholars, too, explored the potentials and limits of thinking about the user
of the Internet in flaneurian terms. Thus, Mike Featherstone drew parallels
between the nineteenth-century flaneur and the virtual flaneur, exploring the
similarities and differences between navigational strategies, affects, and
agencies in the early urban metropolis and the emergent digital realm of the
1990s.30

Although the discourse on the digital flaneur was most prevalent in the 1990s,
it still lingers on in contemporary writings about digitized cultural heritage
collections and their design. A much-cited article by computer scientists
Marian Dörk, Sheelagh Carpendale, and Carey Williamson, for instance, notes
the striking similarity between the “growing cities of the 19th century and
today’s information spaces” and the relationship between “the individual and
the whole.”31 Dörk, Carpendale, and Williamson use the figure of the flaneur
to emphasize the importance of supporting not only utilitarian information
needs through grand systems but also leisurely information surfing behaviors
on an individual level. Dörk, Carpendale, and Williamson’s reflections relate
to the experience of moving about in a mass of information and ways of making
sense of this information. What does it mean to make sense of mass
digitization? How can we say or know that the past two hours we spent
rummaging about in the archives of Google Books, digging deeper in Europeana,
or following hyperlinks in Monoskop made sense, and by whose standards? And
what are the cultural implications of using the flaneur as a cultural
reference point for these ideals? We find few answers to these questions in
Dörk, Carpendale, and Williamson’s article, or in related articles that invoke
the flaneur as a figure of inspiration for new search strategies. Thus, the
figure of the flaneur is predominantly used to express the pleasurable and
productive aspect of archival navigation. But in its emphasis on pleasure and
leisure, the figure neglects the much more ambivalent atmosphere that
enshrouds the flaneur as he navigates the modern metropolis. Nor does it
problematize the privileged viewpoint of the flaneur.

The character of the flaneur, both in its original instantiations in French
literature and in Walter Benjamin’s early twentieth-century writings, was
certainly driven by pleasure; yet, on a more fundamental level, his existence
was also, as Elizabeth Wilson points out in her feminist reading of the
flaneur, “a sorrowful engagement with the melancholy of cities,” which arose
“partly from the enormous, unfulfilled promise of the urban spectacle, the
consumption, the lure of pleasure and joy which somehow seem destined to be
disappointed.”32 Far from an optimistic and unproblematic engagement with
information, then, the figure of the flaneur also evokes deeper anxieties
arising from commodification processes and the accompanying melancholic
realization that no matter how much one strolls and scrolls, nothing one
encounters can ever fully satisfy one’s desires. Benjamin even strikingly
spatializes (and sexualizes) this mental state in an infrastructural
imaginary: the labyrinth. The labyrinth is thus, Benjamin suggests, “the home
of the hesitant. The path of someone shy of arrival at a goal easily takes the
form of a labyrinth. This is the way of the (sexual) drive in those episodes
which precede its satisfaction.”33

Benjamin’s hesitant flaneur caught in an unending maze of desire stands in
contrast to the uncomplicated flaneur invoked in celebratory theories on the
digital flaneur. Yet, recent literature on the design of digital realms
suggests that the hesitant man caught in a drive for more information is a
much more accurate image of the digital flaneur than the man-in-the-know.34
Perhaps, then, the allegorical figure of the flaneur in digital design should
be used less to address pleasurable wandering and more to invoke “the most
characteristic response of all to the wholly new forms of life that seemed to
be developing: ambivalence.”35 Caught up in the commodified labyrinth of the
modern digitized archive, the digital flaneur of mass digitization might just
as easily get stuck in a repetitive, monotonous routine of scrolling and
downloading new things, forever suspended in a state of unfulfilled desire,
as move about in meaningful and pleasurable ways.36

Moreover, and just as importantly, the figure of the flaneur is also entangled
in a cultural matrix of assumptions about gender, capabilities, and colonial
implications. In short: the flaneur is a white, able-bodied male. As feminist
theory attests, the concept of the flaneur is male by definition. Some feminists, such as Griselda Pollock and Janet Wolff, have denied the possibility of a female variant altogether, because of women’s status as (often absent) objects rather than subjects in the nineteenth-century urban environment.37 Others, such as Elizabeth Wilson, Deborah Epstein Nord, and Mica Nava, have complicated the issue by alluding to the opportunities and limitations of thinking about a female variant of the flaneur, for instance a flâneuse.38
These discussions have also reverberated in the digital sphere in new
variations.39 Whatever position one assumes, it is clear that the concept of
the flaneur, even in its female variant, is a complicated one, carrying problematic allusions to a universally privileged subject.

In similar terms, the flaneur also has problematic colonial and racial
connotations. As James Smalls points out in his essay “Race As Spectacle in
Late-Nineteenth-Century French Art and Popular Culture,” the racial dimension
of the flaneur is “conspicuously absent” from most critical engagements with
the concept.40 Yet, as Smalls notes, the question of race is crucial, since
“the black man … is not privileged to lose himself in the Parisian crowd, for
he is constantly reminded of his epidermalized existence, reflected back at
him not only by what he sees, but by what we see as the assumed ‘normal’
white, universal spectator.”41 This othering is, moreover, not limited to the
historical scene of nineteenth-century Paris, but still remains relevant
today. Thus, as Garnette Cadogan notes in his essay “Walking While Black,”
non-white people are offered none of the freedoms of blending into the crowd
that Baudelaire’s and Benjamin’s flaneurs enjoyed. “Walking while black
restricts the experience of walking, renders inaccessible the classic Romantic
experience of walking alone. It forces me to be in constant relationship with
others, unable to join the New York flaneurs I had read about and hoped to
join.”42

Lastly, the classic figure of the flaneur also assumes a body with no
disabilities. As Marian Ryan notes in an essay in the _New York Times_ , “The
art of flânerie entails blending into the crowd. The disabled flaneur can’t
achieve that kind of invisibility.”43 What might we take from these critical
interventions into the uncomplicated discourse of the flaneur? Importantly,
they counterbalance the dominant seductive image of the empowered user, and
remind us of the colonial male gaze inherent in any invocation of the metaphor
of the flaneur, which for the majority of users is a subject position that is
simply not available (nor perhaps desirable).

The limitations of the figure of the flaneur raise questions not only about
the metaphor itself, but also about the topography of knowledge production it
invokes. As already noted, Walter Benjamin placed the flaneur within a larger
labyrinthine topology of knowledge production, where the flaneur could read
the spectacle in front of him without being read himself. Benjamin himself put the flaneur to rest with an analysis of an Edgar Allan Poe story, tracing the demise of the flaneur in an increasingly capitalist topography and noting in melancholy terms that, “The bazaar is the last hangout
of the flaneur. If in the beginning the street had become an interieur for
him, now this interieur turned into a street, and he roamed through the
labyrinth of merchandise as he had once roamed through the labyrinth of the
city. It is a magnificent touch in Poe’s story that it includes along with the
earliest description of the flaneur the figuration of his end.”44 In 2012,
Evgeny Morozov in similar terms declared the death of the cyberflaneur.
Linking the commodification of urban spaces in nineteenth-century Paris to the
commodification of the Internet, Morozov noted that “it’s no longer a place
for strolling—it’s a place for getting things done” and that “Everything that
makes cyberflânerie possible—solitude and individuality, anonymity and
opacity, mystery and ambivalence, curiosity and risk-taking—is under
assault.”45 These two death sentences, separated by a century, link the
environment of the flaneur to significant questions about the commodification
of space and its infrapolitical implications.

Exploring the implications of this topography, the following section suggests,
will help us understand the infrapolitics of the spatial imaginaries of mass
digitization, not only in relation to questions of globalization and late
sovereignty, but also to cultural imaginaries of knowledge infrastructures.
Indeed, these two dimensions are far from mutually exclusive, but rather
belong to the same overarching tale of the politics of mass digitization.
Thus, while the material spatial infrastructures of mass digitization projects
may help us appreciate certain important political dynamics of Europeana,
Google Books, and shadow libraries (such as their territorializing features or
copyright contestations in relation to knowledge production), only an
inclusion of the infrastructural imaginaries of knowledge production will help
us understand the complex politics of mass digitization as it metamorphoses
from analog buildings, shelves, and cabinets to the circulatory networks of
digital platforms.

## Labyrinthine Imaginaries: Infrastructural Perspectives of Power and
Knowledge Production

If the flaneur is a central early figure in the cultural imaginary of the
observer of cultural texts, the labyrinth has long served as a cultural
imaginary of the library, and, in larger terms, the spatialized
infrastructural conditions of knowledge and power. Thus, literature is rife
with works that draw on libraries and labyrinths to convey stories about
knowledge production and the power struggles thereof. Think only of the elderly monk-librarian in Umberto Eco’s classic, _The Name of the Rose_, who notes that “the library is a great labyrinth, sign of the labyrinth of the world. You enter and you do not know whether you will come out”46; or consider the haunting images of being lost in Jorge Luis Borges’s tales about labyrinthine libraries.47 This section therefore turns to the infrastructural space of the
labyrinth, to show that this spatial imaginary, much like the flaneur, is
loaded with cultural ambivalence, and to explore the ways in which the
labyrinthine infrastructural imaginary emphasizes and crystallizes the
infrapolitical tension in mass digitization projects between power and
perspective, agency and environment, playful innovation and digital labor.

The labyrinth is a prevalent literary trope, found in authors from Ovid,
Virgil, and Dante to Dickens and Nietzsche, and it has been used particularly
in relation to issues of knowledge and agency, and in haunting and nightmarish
terms in modern literature.48 As the previous section indicates, the labyrinth
also provides a significant image for understanding our relationship to mass
digitization projects as sites of both knowledge production and experience.
Indeed, one shadow library is even named _Aleph_, which refers to the ancient Hebrew letter and likely also nods at Jorge Luis Borges’s short story _The Aleph_, on infinite labyrinthine architectures. Yet, what kind of
infrastructure is a labyrinth, and how does it relate to the potentials and
perils of mass digitization?

In her rich historical study of labyrinths, Penelope Doob argues that the
labyrinth possesses a dual potentiality: on the one hand, if experienced from
within, the labyrinth is a sign of confusion; on the other, when viewed from
above, it is a sign of complex order.49 As Harold Bloom notes, “all of us have
had the experience of admiring a structure when outside it, but becoming
unhappy within it.”50 Envisioning the labyrinth from within links to a
claustrophobic sense of ignorance, while also implying the possibility of
progress if you just turn the next corner. What better image to describe one’s experience in the labyrinthine infrastructures of mass digitization projects such as Google Books, with their particular conditions and contexts of experience and agency? On the one hand, Google Books appears to provide the
view from above, lending itself as a logistical aid in its information-rich
environment. On the other hand, Google Books also produces an alienating
effect of impenetrability on two levels. First, although Google presents
itself as a compass, its seemingly infinite and constantly rearranging
universe nevertheless creates a sense of vertigo, only reinforced by the
almost existential question “Do you feel lucky?” Second, Google Books also
feels impenetrable on a deeper level, with its black-boxed governing and
ordering principles, hidden behind complex layers of code, corporate cultures,
and nondisclosure agreements.51 But even less-commercial mass digitization
projects such as, for instance, Europeana and Monoskop can produce a sense of
claustrophobia and alienation in the user. Think only of the frustration
encountered when reaching dead ends in the form of broken links or of the lack of access imposed by European copyright regulations. Or even the alienation and dissatisfaction that can well up when, as in Monoskop, there seem to be no limits to knowledge other than one’s own cognitive shortcomings.

The figure of the labyrinth also serves as a reminder that informational
strolling is not only a leisurely experience, but also a laborious process.
Penelope Doob thus points out the common medieval spelling of labyrinth as
_laborintus_ , which foregrounds the concept of labor and “difficult process,”
whether frustrating, useful, or both.52 In an age in which “labor itself is
now play, just as play becomes more and more laborious,”53 Doob’s etymological
excursion serves to highlight the fact that in many mass digitization projects
it is indeed the user’s leisurely information scrolling that in the end
generates profit, cultural value, and budgetary justification for mass
digitization platforms. José van Dijck’s analysis of the valuation of traffic in a digital environment is a timely reminder of how traffic is valued in a cultural memory environment that increasingly orients itself toward social media: “Even though communicative traffic on social media platforms seems determined by social values such as popularity, attention, and connectivity, they are impalpably translated into monetary values and redressed in business models made possible by digital technology.”54 This is visible, for instance, in Europeana’s usage statistics reports, which link the notions of _traffic_ and _performance_ together in an ontological equation (an equation in which poor performance inevitably becomes a mark of death).55 In a blogpost marking the
launch of the _Europeana Statistics Dashboard_ , we are told that information
about mass digitization traffic is “vital information for a modern cultural
institution for both reporting and planning purposes and for public
accountability.”56 Thus, although visitors may feel solitary in their digital
wanderings, their digital footsteps are in fact obsessively traced and tracked
by mass digitization platforms and often also by numerous third parties.

Today, then, the user is indeed at work as she makes her way in the
labyrinthine infrastructures of mass digitization by scrolling, clicking,
downloading, connecting, and clearing and creating new paths. And while
“search” has become a keyword in digital knowledge environments, digital
infrastructures in mass digitization projects in fact distract as much as they
orient. This new economy of cultural memory begs the question: if mass
digitization projects, as labyrinthine infrastructures, invariably disorient
the wanderer as much as they aid her, how might we understand their
infrapolitics? After all, as the previous chapters have shown, mass
digitization projects often present a wide array of motivations for why
digitization should happen on a massive scale, with knowledge production and
cultural enlightenment usually featuring as the strongest arguments. But as
the spatialized heuristics of the flaneur and the labyrinth show, knowledge
production and navigation are anything but simple concepts. Rather, the
political dimensions of mass digitization discussed in previous chapters—such
as standardization, late sovereignty, and network power—are tied up with the
spatial imaginaries of what knowledge production and cultural memory are and
how they should and could be organized and navigated.

The question of the spatial imaginaries of knowledge production and
imagination has a long philosophic history. As historian David Bates notes,
knowledge in the Enlightenment era was often imagined as a labyrinthine
journey. A classic illustration of how this journey was imagined is provided
by Enlightenment philosopher Jean-Louis Castilhon, whose frustration is
palpable in this exclamation: “How cruel and painful is the situation of a
Traveller who has imprudently wandered into a forest where he knows neither
the winding paths, nor the detours, nor the exits!”57 These Enlightenment
journeys were premised upon an infrastructural framework that linked error and
knowledge, but also upon an experience of knowledge quests riddled by loss of
oversight and lack of a compass. As the previous sections show, the labyrinth
as a form of knowledge production in relation to truth and error persists as
an infrastructural trope in the digital. Yet, it has also metamorphosed
significantly since Castilhon. The labyrinthine infrastructural imaginaries we
find in digital environments thus differ significantly from more classical
images, not least under the influence of the rhizomatic metaphors of
labyrinths developed by Deleuze and Guattari and Eco. If the labyrinth of the
Renaissance had an endpoint and a truth, these new labyrinthine
infrastructures, as Kristin Veel points out, had a much more complex
relationship to the spatial organization of the truth. Eco and Deleuze and
Guattari thus conceived of their labyrinths as networks “in which all points
can be connected with one another” with “no center” but “an almost unlimited
multiplicity of alternative paths,” which makes it “impossible to rise above
the structure and observe it from the outside, because it transcends the
graphic two-dimensionality of the two earlier forms of labyrinths.”58 Deleuze
expressed the senselessness of these contemporary labyrinths as a “theater
where nothing is fixed, a labyrinth without a thread (Ariadne has hung
herself).”59

In mass digitization, this new infrastructural imaginary feeds a looming concern over how best to curate and infrastructurate cultural collections. It is this concern we see at play in the aforementioned institutional deliberations over how best to create meaningful paths through the cultural collections.
The main question that resounds is: where should the paths lead if there is no
longer one truth, that is, if the labyrinth has no center? Some mass
digitization projects seem to revel in this new reality. As we have seen,
shadow libraries such as Monoskop and UbuWeb use the affordances of the
digital to create new cultural connections outside of the formal hierarchies
of cultural memory institutions. Yet, while embraced by some, predictably the
new distribution of authority generates anxiety in the cultural memory circles
that had hitherto been able to hold claim to knowledge organization expertise.
This is the dizzying perspective that haunts the cultural memory professionals
faced with Europeana’s data governance model. Thus, as one Europeana
professional explained to me in 2010, “Europeana aims at an open-linked-data
model with a number of implications. One implication is that there will be no
control of data usage, which makes it possible, for instance, to link classics
with porn. Libraries do not agree to this loss of control which was at the
base of their self-understanding.”60 The Europeana professional then proceeded
to recount the profound anxiety experienced and expressed by knowledge
professionals as they increasingly came face-to-face with a curatorial reality
that is radically changing what counts as knowledge and context, where a
search for Courbet could, in theory, not only lead the user to other French
masters of painting but also to a copy of a porn magazine (provided it is out
of copyright). The anxiety experienced by knowledge professionals in the new
cultural memory ecosystem can of course be explained by a rationalized fear of
job insecurity and territorial concerns. Yet, the fear of knowledge
infrastructures without a center may also run deeper. As Penelope Doob reminds
us, the center of the labyrinth historically played a central moral and
epistemological role in the labyrinthine topos, as the site that held the
epiphanous key to unravel whatever evils or secrets the labyrinth contained.
With no center, there is no key, no epiphany.61 From this perspective, then,
it is not only a job that is lost. It is also the meaning of knowledge
itself.62

What, then, can we take from these labyrinthine wanderings as we pursue a
greater understanding of the infrapolitics of mass digitization? Certainly, as
this section shows, the politics of mass digitization is entangled in
spatialized imaginaries that have a long and complex cultural and affective
trajectory interlinked with ontological and epistemological questions about
the very nature of knowledge. Cladding the walls of these trajectories are, of
course, the ever-present political questions of authority and territory, but
also deeper cultural and affective questions about the nature and meaning of knowledge as it is bandied about in our cultural imaginaries, between discoveries
and dead-ends, between freedom and control.

As the next section will show, one concept in particular has come to encapsulate these concerns: serendipity. While the notion of serendipity has a long history, it has gained new relevance with mass digitization, where it is used to express the realm of possibilities opened up by the new digital infrastructures of knowledge production. As such, it has
come to play a role, not only as a playful cultural imaginary, but also as an
architectural ideal in software developments for mass digitization. In the
following section, we will look at a few examples of these architectures, as
well as the knowledge politics they are entangled in.

## The Architecture of Serendipitous Platforms

Serendipity has long been a cherished word in archival studies, used to
describe a magical moment of “Eureka!” A fickle and fabulating concept, it
belongs to the world of discovery, capturing the moment when a meandering
soul, a flaneur, accidentally stumbles upon a valuable find. As such, the
moment of serendipity is almost always a happy circumstance of chance, and
never an unfortunate moment of risk. The notion is also embodied in the word’s own origins. This section outlines those origins and situates the word’s
reemergence in theories on libraries and on digital realms of knowledge
production.

The English aristocrat Horace Walpole coined the word serendipity in a letter
to Horace Mann in 1754, in which he explained his fascination with a Persian
fairy tale about three princes from the _Isle of Serendip_63 who possess superpowers of observation. In his letter, Walpole linked the contents of the fantastical story to his view of how new discoveries are made: “As their highnesses travelled, they were always making discoveries, by ‘accidental sagacity,’ of things which they were not in quest of.”64 And he proposed a
new word—“serendipity”—to describe this sublime talent for discovery.

Walpole’s conceptual invention did not immediately catch fire in common
parlance.65 But a few centuries after its invention, it suddenly took hold.
Who awakened the notion from its dormant state, and why? Sociologists Robert
K. Merton and Elinor Barber provided one influential answer in their own
enjoyable exploration of the word. As they note, serendipity had a particular
playful tone to it, expressing a sense that knowledge comes about not only
through sheer willpower and discipline, but also via pleasurable chance. This
almost hedonistic dimension made it incompatible with the serious ethos of the
nineteenth century. As Merton and Barber note, “The serious early Victorians
were not likely to pick up serendipity, except perhaps to point to it as a
piece of frivolous whimsy. … Although the Victorians, and especially Victorian
scientists, were familiar with the part played by accident in the process of
discovery, they were likely neither to highlight that factor nor to clothe the
phenomenon of accidental discovery in so lighthearted a word as
serendipity.”66 But in the 1940s and 1950s something happened—the word began
to catch on. Merton and Barber link this turn of linguistic events not only to pure chance, but also to a change in scientific networks and paradigms. Traveling
from the world of letters, as they recount, the word began making its way into
scientific circles, where attention was increasingly turned to “splashy
discoveries in lab and field.”67 But as Lorraine Daston notes, “discoveries,
especially those made by serendipity, depend partly on luck, and scientists
schooled in probability theory are loathe to ascribe personal merit to the
merely lucky,” and scientists therefore increasingly began to “domesticate
serendipity.”68 Daston remarks that while scientists schooled in probability
were reluctant to ascribe their discoveries to pure chance, the “historians
and literary scholars who struck serendipitous gold in the archives did not
seem so eager to make a science out of their good fortune.”69 One tale of how
literary and historical scholars struck serendipitous gold in the archive is
provided by Mike Featherstone:

> Once in the archive, finding the right material which can be made to speak
may itself be subject to a high degree of contingency—the process not of
deliberate rational searching, but serendipity. In this context it is
interesting to note the methods of innovatory historians such as Norbert Elias
and Michel Foucault, who used the British and French national libraries in
highly unorthodox ways by reading seemingly haphazardly “on the diagonal,”
across the whole range of arts and sciences, centuries and civilizations, so
that the unusual juxtapositions they arrived at summoned up new lines of
thought and possibilities to radically re-think and reclassify received
wisdom. Here we think of the flaneur who wanders the archival textual city in
a half-dreamlike state in order to be open to the half-formed possibilities of
the material and sensitive to unusual juxtapositions and novel perceptions.70

English scholar Nancy Schultz in similar terms notes that the archive “in the
humanities” represents a “prime site for serendipitous discovery.”71 In most
of these cases, serendipity is taken to mean some form of archival insight,
and often even a critical intellectual process. Deb Verhoeven, Associate Dean
of Engagement and Innovation at the University of Technology Sydney, reminds
us in relation to feminist archival work that “stories of accidental
discovery” can even take on dimensions of feminist solace, consoling “the
researcher, and us, with the idea that no system, whatever its claims to
discipline, comprehensiveness, and structure, is exempt from randomness, flux,
overflow, and therefore potential collapse.”72

But with mass digitization processes, their fusion of probability theories and
archives, and their ideals of combined fun and fact-finding, the questions
raised in the hard sciences about serendipity, its connotations of freedom and
chance, engineering and control, now also haunt the archives of historians and
literary scholars. Serendipity is now often invoked as a motivating factor for digitization in the first place, based on arguments that mass
digitized archives allow not only for dedicated and target-oriented research,
but also for new modes of search, of reading haphazardly “on the diagonal”
across genres and disciplines, as well as across institutional and national
borders that hitherto kept works and insights apart. As one spokesperson from
a prominent mass digitization company states, “digital collections have been
designed both to assist researchers in accessing original primary source
materials and to enable them to make serendipitous discoveries and unexpected
connections between sources.”73 And indeed, this sentiment reverberates in all
mass digitization projects from Europeana and Google Books to smaller shadow
libraries such as UbuWeb and Monoskop. Some scholars even argue that
serendipity takes on new forms due to digitization.74

It seems only natural, then, that mass digitization projects, and their
actors, have actively adopted the discourse of serendipity, both as a selling
point and a strategic claim. Talking about Google’s digitization program, Dr.
Sarah Thomas, Bodley’s Librarian and Director of Oxford University Library
Services, notes: “Library users have always loved browsing books for the
serendipitous discoveries they provide. Digital books offer a similar thrill,
but on multiple levels—deep entry into the texts or the ability to browse the
virtual shelf of books assembled from the world's great libraries.”75 But it has also raised questions for those who are in charge, not only of holding serendipity forth as an ideal, but also of building the architecture to
facilitate it. Dan Cohen, speaking on behalf of the DPLA, thus noted the
centrality of the concept, but also the challenges that mass digitization
raised in practical terms: “At DPLA, we’ve been thinking a lot about what’s
involved with serendipitous discovery. Since we started from scratch and
didn’t need to create a standard online library catalog experience, we were
free to experiment and provide novel ways into our collection of over five
million items. How to arrange a collection of that scale so that different
users can bump into items of unexpected interest to them?” While adopting the
language of serendipity is easy, its infrastructural construction is much
harder to envision. This challenge clearly troubles the strategic team
developing Europeana’s infrastructure, as it notes in a programmatic tone that
stands hilariously at odds with the curiosity it must cater to:

> Reviewing the personas developed for the D6.2 Requirements for Europeana.eu8
deliverable—and in particular those of the “culture vultures”—one finds two
somewhat-opposed requirements. On the one hand, they need to be able to find
what they are looking for, and navigate through clear and well-structured
data. On the other hand, they also come to Europeana looking for
“inspiration”—that is to say, for something new and unexpected that points
them towards possibilities they had previously been unaware of; what, in the
formal literature of user experience and search design, is sometimes referred
to as “serendipity search.” Europeana’s users need the platform to be
structured and predictable—but not entirely so.76

To achieve serendipity, mass digitization projects have often sought to take
advantage of the labyrinthine infrastructures of digitization, relying not
only on their own virtual bookshelves, but also on the algorithmic highways
and back alleys of social media. Twitter, in particular, before it adopted personalization methods, became a preferred infrastructure for mass digitization projects, which took advantage of Twitter’s lack of personalized search to create whimsical bots that injected randomness into the user’s feed. One example is the Digital Public Library of America’s DPLA Bot, which grabs a random noun and uses the DPLA API to share the first result it finds. The DPLA
Bot aims to “infuse what we all love about libraries—serendipitous
discovery—into the DPLA” and thus seeks to provide a “kind of ‘Surprise me!’
search function for DPLA.”77 It did not take the programmer Peter Meyr much
time to develop a similar bot for Europeana. In an interview with
EuropeanaPro, Peter Meyr directly related the EuropeanaBot to the
serendipitous affordances of Twitter and its rewards for mass digitization
projects, noting that:

> The presentation of digital resources is difficult for libraries. It is no
longer possible to just explore, browse the stacks and make serendipitous
findings. With Europeana, you don't even have a physical library to go to. So
I was interested in bringing a little bit of serendipity back by using a
Twitter bot. … If I just wanted to present (semi)random Europeana findings, I
wouldn’t have needed Twitter—an RSS-Feed or a web page would be enough.
However, I wanted to infuse EuropeanaBot with a little bit of “Twitter
culture” and give it a personality.78
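Mechanically, such serendipity bots are simple constructions: pick a random word, query an open cultural heritage API, and post whatever turns up. The sketch below illustrates the principle; it assumes the public DPLA items API, and the word list, the placeholder API key, and the choice to simply print the result rather than post it to Twitter are illustrative stand-ins rather than a reconstruction of either bot’s actual code.

```python
# A minimal sketch of a serendipity bot in the spirit of DPLAbot and EuropeanaBot.
# Assumptions: the public DPLA v2 items API; WORDS, DPLA_API_KEY, and the
# print-instead-of-tweet behavior are hypothetical placeholders.
import random
import requests

WORDS = ["lighthouse", "circus", "comet", "tram", "herbarium"]  # stand-in noun list
DPLA_API_KEY = "YOUR_API_KEY"  # hypothetical key; real keys are issued by DPLA

def random_find():
    """Pick a random noun, query the DPLA items API, and return the first hit."""
    noun = random.choice(WORDS)
    response = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": noun, "api_key": DPLA_API_KEY, "page_size": 1},
        timeout=10,
    )
    response.raise_for_status()
    docs = response.json().get("docs", [])
    if not docs:
        return None
    item = docs[0]
    title = item.get("sourceResource", {}).get("title", "Untitled")
    if isinstance(title, list):  # titles are sometimes returned as lists
        title = title[0]
    return f"Serendipitous find for '{noun}': {title} {item.get('isShownAt', '')}"

if __name__ == "__main__":
    tweet = random_find()
    if tweet:
        print(tweet)  # a real bot would hand this string to a Twitter client
```

The engineered randomness that delights the user thus rests on a handful of deliberate design decisions—precisely the tension that the critics cited later in this section take issue with.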

The British Library also developed a Twitter bot titled the Mechanical
Curator, which posts random resources with no customization except a special
focus on images in the library’s seventeenth- to nineteenth-century
collections.79 But there were also many projects that existed outside social
media platforms and operated across mass digitization projects. One example
was the “serendipity engine,” Serendip-o-matic, which first examined the
user’s research interests and then, based on this data, identified “related
content in locations such as the Digital Public Library of America (DPLA),
Europeana, and Flickr Commons.”80 While this initiative was not endorsed by
any of these mass digitization projects, they nevertheless featured it on
their blogs, integrating it into the mass digitization ecosystem.

Yet, while mass digitization for some represents the opportunity to amplify
the chance of chance, other scholars increasingly wonder whether the
engineering processes of mass digitization would take serendipity out of the
archive. Indeed, to them, the digital is antithetical to chance. One such
viewpoint is uttered by historian Tristram Hunt in an op-ed railing against Google’s British digitization program under the title “Online is fine, but history is best hands on.” In it, Hunt argues that the digital, rather than providing a new means of chance finding, would impede historical discovery, and that only the analog archival environment could foster real historical discoveries, since it is “… only with MS in hand that the real meaning of the text becomes apparent: its rhythms and cadences, the relationship of image to word, the passion of the argument or cold logic of the case. Then there is the serendipity, the scholar’s eternal hope that something will catch his eye.”81 In similar terms, Graeme Davison describes the lack of serendipitous errings in digital archives, likening digital search engines to driving
“a high-powered car down a freeway, compared with walking or cycling. It gets
us there more quickly but we skirt the towns and miss a lot of interesting
scenery on the way.”82 William McKeen also links the loss of serendipity to
the acceleration of method in the digital:

> Think about the library. Do people browse anymore? We have become such a
directed people. We can target what we want, thanks to the Internet. Put a
couple of key words into a search engine and you find—with an irritating hit
or miss here and there—exactly what you’re looking for. It’s efficient, but
dull. You miss the time-consuming but enriching act of looking through
shelves, of pulling down a book because the title interests you, or the
binding. Inside, the book might be a loser, a waste of the effort and calories
it took to remove it from its place and then return. Or it might be a dark
chest of wonders, a life-changing first step into another world, something to
lead your life down a path you didn't know was there.83

Common to all these statements is the sentiment that the engineering of
serendipity removes the very chance of serendipity. As Nicholas Carr notes,
“Once you create an engine—a machine—to produce serendipity, you destroy the
essence of serendipity. It becomes something expected rather than
unexpected.”84 It appears, then, that computational methods have introduced
historians and literary scholars to the same “beaverish efforts”85 to
domesticate serendipity as the hard sciences had to face at the beginning of
the twentieth century.

To my knowledge, few systematic studies exist about whether mass digitization
projects such as Europeana and Google Books hamper or foster creative and
original research in empirical terms. How one would go about such a study is
also an open question. The dichotomy between digital and analog does seem a
bit contrived, however. As Dan Cohen notes in a blogpost for DPLA, “bookstores
and libraries have their own forms of ‘serendipity engineering,’ from
storefront staff picks to behind-the-scenes cataloguing and shelving methods
that make for happy accidents.”86 Yet there is no doubt that the discourse of
serendipity has been infused with new life that sometimes veers toward a
“spectacle of serendipity.”87

Over the past decade, the digital infrastructures that organize our cultural memory have become increasingly integrated into a digital economy that values “experience” as a cultural currency that can be exchanged for profit, and our affective meanderings as a form of industrial production. This digital economy
affects the architecture and infrastructure of digital archives. The archival
discourse on digital serendipity is thus now embroiled in a more deep-seated
infrapolitics of workspace architecture, influenced by Silicon Valley’s
obsession with networks, process, and connectivity.88 Think only of the
increasing importance of Google and Facebook to mass digitization projects:
most of these projects have a Facebook page on which they showcase their
material, just as they take pains to make themselves “algorithmically
recognizable”89 to Google and other search engines in the hope of reaching an
audience beyond the echo chamber of archives and to distribute their archival
material on leisurely tidbit platforms such as Pinterest and Twitter.90 If
serendipity is increasingly thought of as a platform problem, the final
question we might pose is what kind of infrapolitics this platform economy
generates and how it affects mass digitization projects.

## The Infrapolitics of Platform Power

As the previous sections show, mass digitization projects rely upon spatial
metaphors to convey ideas about, and ideals of, cultural memory
infrastructures, their knowledge production, and their serendipitous
potential. Thus, for mass digitization projects, the ideal scenario is that
the labyrinthine errings of the user result in serendipitous finds that in
turn bring about new forms of cultural value. From the point of view of the user, however, being caught up in the labyrinth might just as easily give rise to a sense of lost oversight and alienation in the alleyways of commodified infrastructures. These two scenarios co-exist because of what Penelope Doob (as noted in the section on labyrinthine imaginaries) refers to as the dual potentiality of the labyrinth, which when experienced from within can become a sign of confusion, and when viewed from above becomes a sign of complex order.91

In this final section, I will turn to a new spatial metaphor, which appears to
have resolved this dual potentiality of the spatial perspective of mass
digitization projects: the platform. The platform has recently emerged as a
new buzzword in the digital economy, connoting simultaneously a perspective, a
business strategy, and a political ideology. Ideally the platform provides a
different perspective than the labyrinth, offering the user the possibility of
simultaneously constructing the labyrinth and viewing it from above. This
final section therefore explores how we might understand the infrapolitics of
the platform, and its role in the digital economy.

In its recent business strategy, Europeana claimed that it was moving from
operating as a “portal” to operating as a “platform.”92 The announcement was
part of a broader infrastructural transition in the field of cultural memory,
undergirded by a process of opening up and connecting the cultural memory
sector to wider knowledge ecosystems.93 Indeed, Europeana’s move is part of a
much larger discursive and material reality of a more fundamental process of
“platformization” of the web.94 The notion of the platform has thus recently
become an important heuristic for understanding the cultural development of
the web and its economy, fusing the computational understanding of the
platform as an environment in which code is executed95 and the political and
social understanding of a platform as a site of politics.96

While the infrapolitics of the platformization of the web has become a central discussion in software and communication studies, little attention has been paid to the implications of platforms for the politics of cultural memory.
Yet, Europeana’s business strategy illustrates the significant infrapolitical
role that platforms are given in mass digitization literature. Citing digital
historian Tim Sherratt’s claim that “portals are for visiting, platforms for
building on,”97 Europeana’s strategy argues that if cultural memory sites free
themselves and their content from the “prison of portals” in favor of more
openness and flexibility, this will in turn empower users to create their own
“pathways” through the digital cultural memory, instead of being forced to
follow predetermined “narrative journeys.”98 The business plan’s reliance on
Sherratt’s theory of platforms shows that although the platform has a
technical meaning in computation, Europeana’s discourse goes beyond mere
computational logic. It instead signifies an infrapolitics that carries with
it an assumption about the political dynamics of software, standing in for the
freedom to act in the labyrinthine infrastructures of digital collections.

Yet, what is a platform, and how might we understand its infrapolitics? As
Tarleton Gillespie points out, the oldest definition of platform is
architectural, as a level or near-level surface, often elevated.99 As such,
there is something inherently simple about platforms. As architect Sverre Fehn
notes, “the simplest form of architecture is to cultivate the surface of the
earth, to make a platform.”100 Fehn’s statement conceals a more fundamental
insight about platforms, however: in the establishment of a low horizontal
platform, one also establishes a social infrastructure. Platforms are thus not
only material constructions, they also harbor infrapolitical affordances. The
etymology of the notion of “platform” evidences this infrapolitical dimension.
Originally a spatial concept, the notion of platform appeared in
architectural, figurative, and military formations in the sixteenth century,
soon developing into specialized discourses of party programs and military and
building construction,101 religious congregation,102 and architectural vantage
points.103 Both the architectural and social understandings of the term
connote a process in which sites of common ground are created in
contradistinction to other sites. In geology, for instance, platforms emerge
from abrasive processes that elevate and distinguish one area in relation to
others. In religious and political discourse, platforms emerge as
organizational sites of belonging, often in contradistinction to other forms
of organization. Platforms, then, connote both common ground and demarcated
borders that emerge out of abrasive processes. In the nineteenth century, a
third meaning adjoined the notion of platforms, namely trade-related
cooperation. This introduced a dynamic to the word that is less informed by
abrasive processes and more by the capture processes of what we might call
“connective capitalism.” Yet, despite connectivity taking center stage, even
these platforms were described as territorializing constructs that favor some
organizations and corporations over others.104

In the twentieth and twenty-first centuries, as Gilles Deleuze and Felix
Guattari successfully urged scholars and architects to replace roots with
rhizomes, the notion of platform began taking on yet another meaning. Deleuze
and Guattari began fervently arguing for the nonexistence of rooted
platforms.105 Their vision soon gave rise to a nonfoundational understanding
of the world as a “limitless multiplicity of positions from which it is
possible only to erect provisional constructions.”106 Deleuze and Guattari’s
ontology became widely influential in theorizing the web _in toto_ ; as Rem
Koolhaas once noted, the “language of architecture—platform, blueprint,
structure—became almost the preferred language for indicating a lot of
phenomenon that we’re facing from Silicon Valley.”107 From the singular
platforms of military and party politics emerged, then, the thousand
platforms of the digital, where “nearly every surge of research and investment
pursued by the digital industry—e-commerce, web services, online advertising,
mobile devices and digital media sales—has seen the term migrate to it.”108

What infrapolitical logic can we glean from Silicon Valley’s adoption of the
vernacular notion of the platform? Firstly, it is an infrapolitics of
temporality. As Tarleton Gillespie points out, the semantic aspects of
platforms “point to a common set of connotations: a ‘raised level surface’
designed to facilitate some activity that will subsequently take place. It is
anticipatory, but not causal.”109 The inscription of platforms into the
material infrastructures of the Internet thus assumes a value-producing
futurity. If serendipity is what is craved, then platforms are the site in
which this is thought to take place.

Despite its inclusion in the entrepreneurial discourse of Silicon Valley, the
notion of the platform is also used to signal an infrapolitics of
collaboration, even subversion. Olga Goriunova, for instance, explores the subversive dynamics of critical artistic platforms,110 and Trebor Scholz
promotes the term “platform cooperativism” to advance worker-based
cooperatives that would “design their own apps-based platforms, fostering
truly peer-to-peer ways of providing services and things, and speak truth to
the new platform capitalists.”111 Shadow libraries such as Monoskop appear as
perfect examples of such subversive platforms and evidence of Srnicek’s
reminder that not _all_ social interactions are co-opted into systems of
profit generation. 112 Yet, as the territorial, legal, and social
infrastructures of mass digitization become increasingly labyrinthine, it
takes a lot of critical consciousness to properly interpret and understand their
infrapolitics. Engage with the shadow library Library Genesis on Facebook, for
instance, and you submit to platform capitalism.

A significant trait of platform-based corporations such as Google and Facebook
is that they more often than not present themselves as apolitical, neutral,
and empowering tools of connectivity, passive until picked up by the user.
Yet, as Lisa Nakamura notes, “reading’s economies, cultures of sharing, and
circuits of travel have never been passive.”113 One of digital platforms’ most
important infrapolitical traits is their dependence on network effects and a
winner-takes-all logic, where the platform owner is not only conferred
enormous power vis-à-vis other less successful platforms but also vis-à-vis
the platform user.114 Within this game, the platform owner determines the
rules of the product and the service on offer. Entering into the discourse of
platforms implies, then, not only constructing a software platform, but also
entering into a parasitical game of relational network effects, where
different platforms challenge and use each other to gain more views and
activity. This gives successful platforms a great advantage in the digital
economy. They not only gain access to data, but they also control the rules of
how the data is to be managed and governed. Therefore, when a user is surfing
Google Books, Google—and not the library—collects the user’s search queries,
including results that appeared in searches and pages the user visited from
the search. The browser, moreover, tracks the user’s activity, including pages
the user has visited and when, user data, and possibly user login details with
auto-fill features, user IP address, Internet service provider, device
hardware details, operating system and browser version, cookies, and cached
data from websites. The labyrinthine infrastructure of the mass digitization
ecosystem also means that if you access one platform through another, your
data will be collected in different ways. Thus, if you visit Europeana through
Facebook, it will be Facebook that collects your data, including name and
profile; biographical information such as birthday, hometown, work history,
and interests; username and unique identifier; subscriptions, location,
device, activity date, time and time-zone, activities; and likes, check-ins,
and events.115 As more platforms emerge from which one can access mass
digitized archives, such as social media sites like Facebook, Google+,
Pinterest, and Twitter, as well as mobile operating systems such as Android, gaining an
overview of who collects one’s data and how becomes more nebulous.

Europeana’s privacy reminder illustrates the assemblatic infrastructural set-up of mass digitization projects and how they operate with multiple entry points, each of which may attach its own infrapolitical dynamics. It also illustrates the labyrinthine infrastructures of privacy settings, which are increasingly difficult to map because of constant changes and reconfigurations. It furthermore illustrates the changing legal order from the
relatively stable sovereign order of human rights obligations to the
modulating landscape of privacy policies.

How then might we characterize the infrapolitics of the spatial imaginaries of
mass digitization? As this chapter has sought to convey, writings about mass
digitization projects are shot through with spatialized metaphors, from the
flaneur to the labyrinth and the platform, either in literal terms or in the
imaginaries they draw on. While this section has analyzed these imaginaries in
a somewhat chronological fashion, with the interactivity of the platform
increasingly replacing the more passive gaze of the spectator, they coexist in
that larger complex of spatial digital thinking. While often used to elicit
uncomplicated visions of empowerment, desire, curiosity, and productivity,
these infrapolitical imaginaries in fact show the complexity of mass
digitization projects in their reinscription of users and cultural memory
institutions in new constellations of power and politics.

## Notes

1. Kelly 1994, p. 263. 2. Connection Machines were developed by the
supercomputer manufacturer Thinking Machines, a concept that also appeared in
Jorge Luis Borges’s _The Total Library_. 3. Brewster Kahle, “Transforming Our
Libraries from Analog to Digital: A 2020 Vision,” _Educause Review_ , March
13, 2017, from-analog-to-digital-a-2020-vision>. 4. Ibid. 5. Couze Venn, “The
Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 36. 6. Hacking
2010. 7. Lefebvre 2009. 8. Blair and Stallybrass 2010, 139–163. 9. Ibid., 143.
10. Dewey 1926, 311. 11. See, for instance, Lorraine Daston’s wonderful
account of the different types of historical consciousness we find in archives
across the sciences: Daston 2012. 12. David Weinberger, “Library as Platform,”
_Library Journal_ , September 4, 2012, /future-of-libraries/by-david-weinberger/#_>. 13. Nakamura 2002, 89. 14.
Shannon Mattern,”Library as Infrastructure,” _Places Journal_ , June 2014,
. 15. Couze
Venn, “The Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 35–40.
16. Žižek 2009, 39. 17. Voltaire, “Une grande bibliothèque a cela de bon,
qu’elle effraye celui qui la regarde,” in _Dictionaire Philosophique_ , 1786,
265. 18. In his autobiography, Borges asserted that it “was meant as a
nightmare version or magnification” of the municipal library he worked in up
until 1946. Borges describes his time at this library as “nine years of solid
unhappiness,” both because of his co-workers and the “menial” and senseless
cataloging work he performed in the small library. Interestingly, then, Borges
translated his own experience of being informationally underwhelmed into a
tale of informational exhaustion and despair. See “An Autobiographical Essay”
in _The Aleph and Other Stories_ , 1978, 243. 19. Borges 2001, 216. 20. Yeo
2003, 32. 21. Cited in Blair 2003, 11. 22. Bawden and Robinson 2009, 186. 23.
Garrett 1999. 24. Featherstone 2000, 166. 25. Thus, for instance, one
Europeana-related project with the apt acronym PATHS, argues for the need to
“make use of current knowledge of personalization to develop a system for
navigating cultural heritage collections that is based around the metaphor of
paths and trails through them” (Hall et al. 2012). See also Walker 2006. 26.
Inspiring texts for (early) spatial thinking of the Internet, see: Hayles
1993; Nakamura 2002; Chun 2006. 27. Much has been written about whether or not
it makes sense to frame digital realms and infrastructures in spatial terms,
and Wendy Chun has written an excellent account of the stakes of these
arguments, adding her own insightful comments to them; see chapter 1, “Why
Cyberspace?” in Chun 2013. 28. Cited in Hartmann 2004, 123–124. 29. Goldate
1996. 30. Featherstone 1998. 31. Dörk, Carpendale, and Williamson 2011, 1216.
32. Wilson 1992, 108. 33. Benjamin. 1985a, 40. 34. See, for instance, Natasha
Dow Schüll’s fascinating study of the addictive design of computational
culture: Schüll 2014. For an industry perspective, see Nir Eyal, _Hooked: How
to Build Habit-Forming Products_ (Princeton, NJ: Princeton University Press,
2014). 35. Wilson 1992, 93. 36. Indeed, it would be interesting to explore the
link between Susan Buck Morss’s reinterpretation of Benjamin’s anesthetic
shock of phantasmagoria and today’s digital dopamine production, as described
by Natasha Dow Schüll in _Addicted by Design_ (2014); see Buck-Morss 2006. See
also Bjelić 2016. 37. Wolff 1985; Pollock 1998. 38. Wilson 1992; Nord 1995;
Nava and O’Shea 1996, 38–76. 39. Hartmann 1999. 40. Smalls 2003, 356. 41.
Ibid., 357. 42. Cadogan 2016. 43. Marian Ryan, “The Disabled flaneur,” _New
York Times_ , December 12, 2017, /the-disabled-flaneur.html>. 44. Benjamin. 1985b, 54. 45. Evgeny Morozov, “The
Death of the Cyberflaneur,” _New York Times_ , February 4, 2012. 46. Eco 2014,
169. 47. See also Koevoets 2013. 48. In colloquial English, “labyrinth” is
generally synonymous with “maze,” but some people observe a distinction, using
maze to refer to a complex branching (multicursal) puzzle with choices of path
and direction, and using labyrinth for a single, non-branching (unicursal)
path, which leads to a center. This book, however, uses the concept of the
labyrinth to describe all labyrinthine infrastructures. 49. Doob 1994. 50.
Bloom 2009, xvii. 51. Might this be the labyrinthine logic detected by
Foucault, which unfolds only “within a hidden landscape,” revealing “nothing
that can be seen” and partaking in the “order of the enigma”; see Foucault
2004, 98. 52. Doob 1994, 97. Doob also finds this perspective in the
fourteenth century in Chaucer’s _House of Fame_ , in which the labyrinth
“becomes an emblem of the limitations of knowledge in this world, where all we
can finally do is meditate on _labor intus_ ” (ibid., 313). Lady Mary Wroth’s
work _Pamphilia to Amphilanthus_ provides the same imagery, telling the story
of the female heroine, Pamphilia, who fails to escape a maze but nevertheless
engages her experience within it as a source of knowledge. 53. Galloway 2013a,
29. 54. van Dijck 2012. 55. “Usage Stats for Europeana Collections,”
_EuropeanaPro,_ usage-statistics>. 56. Joris Pekel, “The Europeana Statistics Dashboard is
here,” _EuropeanaPro_ , April 6, 2016, /introducing-the-europeana-statistics-dashboard>. 57. Bates 2002, 32. 58. Veel
2003, 154. 59. Deleuze 2013, 56. 60. Interview with professor of library and
information science working with Europeana, Berlin, Germany, 2011. 61. Borges
mused upon the possible horrendous implications of such a lack, recounting two
labyrinthine scenarios he once imagined: “In the first, a man is supposed to
be making his way through the dusty and stony corridors, and he hears a
distant bellowing in the night. And then he makes out footprints in the sand
and he knows that they belong to the Minotaur, that the minotaur is after him,
and, in a sense, he, too, is after the minotaur. The Minotaur, of course,
wants to devour him, and since his only aim in life is to go on wandering and
wandering, he also longs for the moment. In the second sonnet, I had a still
more gruesome idea—the idea that there was no minotaur—that the man would go
on endlessly wandering. That may have been suggested by a phrase in one of
Chesterton’s Father Brown books. Chesterton said, ‘What a man is really afraid
of is a maze without a center.’ I suppose he was thinking of a godless
universe, but I was thinking of the labyrinth without a minotaur. I mean, if
anything is terrible, it is terrible because it is meaningless.” Borges and
Dembo 1970, 319. 62. Borges actually found a certain pleasure in the lack of
order, however, noting that “I not only feel the terror … but also, well, the
pleasure you get, let’s say, from a chess puzzle or from a good detective
novel.” Ibid. 63. Serendib, also spelled Serendip (Arabic Sarandīb), was the
Persian/Arabic word for the island of Sri Lanka, recorded in use as early as
AD 361. 64. Letter to Horace Mann, 28 January 1754, in _Walpole’s
Correspondence_ , vol. 20, 407–411. 65. As Robert Merton and Elinor Barber
note, it first made it into the OED in 1912 (Merton and Barber 2004, 72). 66.
Merton and Barber 2004, 40. 67. Lorraine Daston, “Are You Having Fun Today?,”
_London Review of Books_ , September 23, 2004. 68. Ibid. 69. Ibid. 70.
Featherstone 2000, 594. 71. Nancy Lusignan Schulz, “Serendipity in the
Archive,” _Chronicle of Higher Education_ , May 15, 2011,
. 72.
Verhoeven 2016, 18. 73. Caley 2017, 248. 74. Bishop 2016 75. “Oxford-Google
Digitization Project Reaches Milestone,” Bodleian Library and Radcliffe
Camera, March 26, 2009.
. 76. Timothy
Hill, David Haskiya, Antoine Isaac, Hugo Manguinhas, and Valentine Charles
(eds.), _Europeana Search Strategy_ , May 23, 2016,
.
77. “DPLAbot,” _Digital Public Library of America_ , .
78. “Q&A with EuropeanaBot developer,” _EuropeanaPro_ , August 20, 2013,
. 79. There
are of course many other examples, some of which offer greater interactivity,
such as the TroveNewsBot, which feeds off of the National Library of
Australia’s 370 million resources, allowing the user to send the bot any text
to get the bot digging through the Trove API for a matching result. 80.
Serendip-o-matic, n.d. . 81. Tristram Hunt,
“Online Is Fine, but History Is Best Hands On,” _Guardian_ July 3, 2011,
library-google-history>. 82. Davison 2009. 83. William McKeen, “Serendipity,”
_New York Times,_ (n.d.),
. 84. Carr 2006.
We find this argument once again in Aleks Krotoski, who highlights the man-machine dichotomy, noting that the “controlled binary mechanics” of the search
engine actually make serendipitous findings “more challenging to find” because
“branching pathways of possibility are too difficult to code and don’t scale”
(Aleks Krotoski, “Digital serendipity: be careful what you don't wish for,”
_Guardian_ , August 11, 2011,
profiling-aleks-krotoski>.) 85. Lorraine Daston, “Are You Having Fun Today?,”
_London Review of Books_ , September 23, 2004. 86. Dan Cohen, “Planning for
Serendipity,” _DPLA_ News and Blog, February 7, 2014,
. 87. Shannon
Mattern, “Sharing Is Tables,” _e-flux_ , October 17, 2017,
furniture-for-digital-labor/>. 88. Greg Lindsay, “Engineering Serendipity,”
_New York Times_ , April 5, 2013,
serendipity.html>. 89. Gillespie 2017. 90. See, for instance, Milena Popova,
“Facebook Awards History App that Will Use Europeana’s Collections,”
_EuropeanaPro_ , March 7, 2014, awards-history-app-that-will-use-europeanas-collections>. 91. Doob 1994. 92.
“Europeana Strategy Impact 2015–2020,”
.
93. Ping-Huang 2016, 53. 94. Helmond 2015. 95. Ian Bogost and Nick Montfort.
2009. “Platform studies: frequently asked questions.” _Proceedings of the
Digital Arts and Culture Conference_.
. 96. Srnicek 2017; Helmond 2015;
Gillespie 2010. 97. “While a portal can present its aggregated content in a
way that invites exploration, the experience is always constrained—predetermined by a set of design decisions about what is necessary, relevant and
useful. Platforms put those design decisions back into the hands of users.
Instead of a single interface, there are innumerable ways of interacting with
the data.” See Tim Sherratt, “From Portals to Platforms; Building New
Frameworks for User Engagement,” National Library of Australia, November 5,
2013, platform>. 98. “Europeana Strategy Impact 2015–2020,”
.
99. Gillespie 2010, 349. 100. Fjeld and Fehn 2009, 108. 101. Gießmann 2015,
126. 102. See, for example, C. S. Lewis’s writings on Calvinism in _English
Literature in the Sixteenth Century Excluding Drama_. Or how about
Presbyterian minister Lyman Beecher, who once noted in a sermon: “in organizing
any body, in philosophy, religion, or politics, you must _have_ a platform;
you must stand somewhere; on some solid ground.” Such a platform could gather
people, so that they could “settle on principles just as … bees settle in
swarms on the branches, fragrant with blossoms and flowers.” See Beecher 2012,
21. 103. “Platform, in architecture, is a row of beams which support the
timber-work of a roof, and lie on top of the wall, where the entablature ought
to be raised. This term is also used for a kind of terrace … from whence a
fair prospect may be taken of the adjacent country.” See Nicholson 1819. 104.
As evangelist Calvin Colton noted in his work on the US’s public economy, “We
find American capital and labor occupying a very different position from that
of the same things in Europe, and that the same treatment applied to both,
would not be beneficial to both. A system which is good for Great Britain may
be ruinous to the United States. … Great Britain is the only nation that is
prepared for Free Trade … on a platform of universal Free Trade, the advanced
position of Great Britain … in her skill, machinery, capital and means of
commerce, would make all the tributary to her; and on the same platform, this
distance between her and other nations … instead of diminishing, would be
forever increasing, till … she would become the focus of the wealth, grandeur,
and power of the world.” 105. Deleuze and Guattari 1987. 106. Solá-Morales
1999, 86. 107. Budds 2016. 108. Gillespie 2010, 351. 109. Gillespie 2010, 350.
Indeed, it might be worth resurrecting the otherwise-extinct notion of
“plotform” to reinscribe agency and planning into the word. See Tawa 2012.
110. As Olga Goriunova points out, platforms have historically played a
significant role in creative processes as a “set of shared resources that
might be material, organizational, or intentional that inscribe certain
practices and approaches in order to develop collaboration, production, and
the capacity to generate change.” Indeed, platforms form integral
infrastructures in the critical art world for alternative systems of
organization and circulation that could be mobilized to “disrupt
institutional, representational, and social powers.” See Olga Goriunova, _Art
Platforms and Cultural Production on the Internet_ (New York: Routledge,
2012), 8. 111. Trebor Scholz, “Platform Cooperativism vs. the Sharing
Economy,” _Medium_ , December 5, 2016, cooperativism-vs-the-sharing-economy-2ea737f1b5ad>. 112. Srnicek 2017, 28–29.
113. Nakamura 2013, 243. 114. John Zysman and Martin Kennedy, “The Next Phase
in the Digital Revolution: Platforms, Automation, Growth, and Employment,”
_ETLA Reports_ 61, October 17, 2016, /ETLA-Raportit-Reports-61.pdf>. 115. Europeana’s privacy page explicitly notes
this, reminding the user that, “this site may contain links to other websites
that are beyond our control. This privacy policy applies solely to the
information you provide while visiting this site. Other websites which you
link to may have privacy policies that are different from this Privacy
Policy.” See “Privacy and Terms,” _Europeana Collections_ ,
.

# 6 Concluding Remarks

I opened this book claiming that the notion of mass digitization has shifted
from a professional concept to a cultural political phenomenon. If the former
denotes a technical way of duplicating analog material in digital form, mass
digitization as a cultural practice is a much more complex apparatus. On the
one hand, it offers the simple promise of heightened public and private access
to—and better preservation of—the past; on the other, it raises significant
political questions about ethics, politics, power, and care in the digital
sphere. I locate the emergence of these questions within the infrastructures
of mass digitization and the ways in which they not only offer new ways of
reading, viewing, and structuring cultural material, but also new models of
value and its extraction, and new infrastructures of control. The political
dynamic of this restructuring, I suggest, may meaningfully be referred to as a
form of infrapolitics, insofar as the political work of mass digitization
often happens at the level of infrastructure, in the form of standardization,
dissent, or both. While mass digitization entwines the cultural politics of
analog artifacts and institutions with the infrapolitical logics of the new
digital economies and technologies, there is no clear-cut distinction between
the analog and digital realms in this process. Rather, paraphrasing N.
Katherine Hayles, I suggest that mass digitization, like a Janus-figure,
“looks to past and future, simultaneously reinforcing and undermining both.”1

A persistent challenge in the study of mass digitization is the mutability of
the analytical object. The unstable nature of cultural memory archives is not
a new phenomenon. As Derrida points out, they have always been haunted by an
unintended instability, which he calls “archive fever.” Yet, mass digitization
appears to intensify this instability even further, both in its material and
cultural instantiations. Analog preservation practices that seek to stabilize
objects are in the digital realm replaced with dynamic processes of content
migration and software updates. Cultural memory objects become embedded in
what Wendy Chun has referred to as the enduring ephemerality of the digital as
well as the bleeding edge of obsolescence.2

Indeed, from the moment when the seed for this book was first planted to the
time of its publication, the landscape of mass digitization, and the political
battles waged on its maps, has changed considerably. Google Books—which a
decade ago attracted the attention, admiration, and animosity of all—recently
metamorphosed from a giant flood to a quiet trickle. After a spectacle of
press releases on quantitative milestones, epic legal battles, and public
criticisms, Google apparently lost interest in Google Books. Google’s gradual
abandonment of the project resembled more an act of prolonged public ghosting
than a clear-cut break-up, leaving the public to read between the lines
about where the company was headed: scanning activities dwindled; the Google
Books blog closed along with its Twitter feed; press releases dried up; staff
was laid off; and while scanning activities are still ongoing, they are
limited to works in the public domain, changing the scale considerably.3 One
commentator diagnosed the change of strategy as the demise of “the greatest
humanistic project of our time.”4 Others acknowledged in less dramatic terms
that while Google’s scanning activities may have stopped, its legacy lives on
and is still put to active use.5

In the present context, the important point to make is that a quiet life does
not necessarily equal death. Indeed, this is the lesson we learn from
attending to the subtle workings of infrastructure: the politics of
infrastructure is the politics of what goes on behind the curtains, not only
what is launched to the front page. Thus, as one engineer notes when
confronted with the fate of Google Books, “We’re not focused on shiny features
and things that are very visible to users. … It’s more like behind-the-scenes
work and perfecting the technology—acquiring content, processing it properly
so that we can view the entire book online, and adjusting the search
algorithm.”6 This is a timely reminder that any analysis of the infrapolitics
of mass digitization has to tend not only to the visible and loud politics of
construction, but also the quiet and ongoing politics of infrastructure
maintenance. It makes no sense to write an obituary for Google Books if the
infrastructure is still at work. Moreover, the assemblatic nature of mass
digitization also demands that we do not stop at the immediate borders of a
project when making analytical claims about its infrapolitics. Thus, while
Google Books may have stopped in its tracks, other trains of mass digitization
have pulled up instead, carrying the project of mass digitization forward
toward new, divergent, and experimental sites. Google's different engagements with cultural digitization show that an analysis of the politics of Google's memory work needs to operate with an assemblatic method, rather than a delineating approach.7 Europeana and DPLA are also mutable analytical objects, both in economic and cultural form. Europeana leads a precarious life from one EU budget framework to the next, and its cultural identity and
software instantiations have transformed from a digital library, to a portal,
to a platform over the course of only a few decades. Last, but not least,
shadow libraries are mediating and multiplying cultural memory objects from
servers and mirror links that sometimes die just as quickly as they emerged.
The question of institutionalization matters greatly in this respect,
outlining what we might call a spectrum of contingency. If a mass digitization
project lives in the margins of institutions, such as in the case of many
shadow libraries, its infrastructure is often fraught with uncertainties. Less
precarious, but nonetheless tumultuous, are the corporate institutions with
their increasingly short market-driven lifespans. And, at the other end of the
spectrum, we find mass digitization projects embedded in bureaucratic
apparatuses whose lumbering budget processes provide publicly funded mass
digitization projects with more stable infrastructures.

The temporal dimension of mass digitization projects also raises important
questions about the horizon of cultural memory in material terms. Should mass
digitization, one might ask, also mean whither analog cultural memory? This
question seems relevant not least in cases where institutions consider
digitization as a form of preservation that allows them to discard analog
artifacts once digitized. In digital form, we further have to contend with a
new temporal horizon of cultural memory itself, based not only on remembrance but also on anticipation in the manner of “If you liked this, you might also like ….” Thus, while cultural memory objects link to objects of the
past, mass digitized cultural memory also gives rise to new methods of
prediction and preemption, for instance in the form of personalization. In
this anticipatory regime, cultural memory becomes subject to perpetual
calculatory activities that process affects and activities in terms of likelihoods and probabilistic outcomes.

Thus, cultural memory has today become embedded in new glocalized
infrastructures. On the one hand, these infrastructures present novel
opportunities. Cultural optimists have suggested that mass digitization has
the potential to give rise to new cosmopolitan public spheres untethered from
the straitjackets of national territorializing forces. On the other hand,
critics argue that there is little evidence that cosmopolitan dynamics are in
fact at work. Instead, new colonial and neoliberal platforms arise from a
complex infrastructural apparatus of private and public institutions and
become shaped by political, financial, and social struggles over
representation, control, and ownership of knowledge.

In summary, it is obvious that the scale of mass digitization, public and
private, licit and illicit, has transformed how we engage with texts, cultural
works, and cultural memory. People today have instant access to a wealth of
works that would previously have required large amounts of money, as well as
effort, to engage with. Most of us enjoy the new cultural freedoms we have
been given to roam the archives, collecting and exploring oddities along the
way, and making new connections between works that would previously have been
held separate by taxonomy, geography, and time in the labyrinthine material
and social infrastructures of cultural memory.

A special attraction of mass digitization no doubt lies in its unfathomable
scale and linked nature, and the fantasy and “spectacle of collecting.”8 The
new cultural environment allows the user to accelerate the pace of information
by accessing key works instantly as well as idly rambling in the exotic back
alleys of digitized culture. Mass digitized archives can be explored to
functional, hedonistic, and critical ends (sometimes all at the same time),
and can be used to exhume forgotten works, forgotten authors, and forgotten
topics. Within this paradigm, the user takes center stage—at least
discursively. Suddenly, a link made between a porn magazine and a Courbet
painting could well be a valued cultural connection instead of a frowned-upon
transgression in the halls of high culture. Users do not just download books;
they also upload new folksonomies, “ego-documents,” and new cultural
constellations, which are all welcomed in the name of “citizen science.”
Digitization also infuses texts with new life due to its new connective
properties that allow readers and writers to intimately and
exhibitionistically interact around cultural works, and it provides new ways
of engaging with texts as digital reading migrates toward service-based rather
than hardware-based models of consumption. Digitization allows users to
digitally collect works themselves and indulge in alluring archival riches in
new ways.

But mass digitization also gives rise to a range of new ethical, political,
aesthetic, and methodological questions concerning the spatio-temporality,
ownership, territoriality, re-use, and dissemination of cultural memory
artifacts. Some of those dimensions have been discussed in detail in the
present work and include questions about digital labor, platformization,
management of visibility, ownership, copyright, and other new forms of control,
as well as processes of de- and recentralization and privatization. Others have only
been alluded to but continue to gain in relevance as processes of mass
digitization excavate and make public sensitive and contested archival
material. Thus, as the cultural memories and artifacts of indigenous
populations, colonized territories and other marginalized groups are brought
online, as well as artifacts that attest to the violent regimes of colonialism
and patriarchy, an attendant need has emerged for an ethics of care that goes
beyond simplistic calls for right to access, to instead attend to the
sensitivity of the digitized material and the ways in which we encounter these
materials.

Combined, these issues show that mass digitization is far from a
straightforward technical affair. Rather, the productive dimensions of mass
digitization emerge from the rubble of disruptive and turbulent political
processes that violently dislocate established frontiers and power dynamics
and give rise to new ones that are yet to be interpreted. Within these
turbulent processes, the familiar narratives of empowered users collecting and
connecting works and ideas in new and transgressive ways all too often leave
out the simultaneous and integrated story of how the labyrinthine
infrastructures of mass digitization also write themselves on the back of the
users, collecting them and their thoughts in the process, and subjecting them
to new economic logics and political regimes. As Lisa Nakamura reminds us, “by
availing ourselves of its networked virtual bookshelves to collect and display
our readerliness in a postprint age, we have become objects to be collected.”9
Thus, as we gather vintage images on Pinterest, collect books in Google Books,
and retweet sound files from Europeana, we do best not only to question the
cultural logic and ethics of these actions but also to remember that as we
collect and connect, we are also ourselves collected and connected.

If the power of mass digitization happens at the level of infrastructure,
political resistance will have to take the form of infrastructural
intervention. We play a role in the formulation of the ethics of such
interventions, and as such we have to be willing to abandon the predominant
tropes of scale, access, and acceleration in favor of an infrapolitics of
care—a politics that offers opportunities for mindful, slow, and focused
encounters.

## Notes

1. Hayles 1999, 17.
2. Chun 2008; Chun 2017.
3. Murrell 2017.
4. James Somers, “Torching the Modern-Day Library of Alexandria,” _The Atlantic_, April 20, 2017.
5. Jennifer Howard, “What Happened to Google’s Effort to Scan Millions of University Library Books?,” _EdSurge_, August 10, 2017.
6. Scott Rosenberg, “How Google Books Got Lost,” _Wired_, November 4, 2017.
7. What to make, for instance, of the new trend of employing Google’s neural networks to find one’s museum doppelgänger from the company’s image database? Or the fact that Google Cultural Institute is consistently turning out new cultural memory hacks such as its cardboard VR glasses, its indoor mapping of museum spaces, and its gigapixel Art Camera, which reproduces artworks in uncanny detail? Or the expansion of their remit from cultural memory institutions to also encompass natural history museums? See, for example, Adrien Chen, “The Google Arts & Culture App and the Rise of the ‘Coded Gaze,’” _New Yorker_, January 26, 2018.
8. Nakamura 2013, 240.
9. Ibid., 241.

## References

1. Abbate, Janet. 2012. _Recoding Gender: Women’s Changing Participation in Computing_. Cambridge, MA: MIT Press.
2. Abrahamsen, Rita, and Michael C. Williams. 2011. _Security beyond the State: Private Security in International Politics_. Cambridge: Cambridge University Press.
3. Adler-Nissen, Rebecca, and Thomas Gammeltoft-Hansen. 2008. _Sovereignty Games: Instrumentalizing State Sovereignty in Europe and Beyond_. New York: Palgrave Macmillan.
4. Agre, Philip E. 2000. “The Market Logic of Information.” _Knowledge, Technology & Policy_ 13 (3): 67–77.
5. Aiden, Erez, and Jean-Baptiste Michel. 2013. _Uncharted: Big Data as a Lens on Human Culture_. New York: Riverhead Books.
6. Ambati, Vamshi, N. Balakrishnan, Raj Reddy, Lakshmi Pratha, and C. V. Jawahar. 2006. “The Digital Library of India Project: Process, Policies and Architecture.” _CiteSeer_. .
7. Amoore, Louise. 2013. _The Politics of Possibility: Risk and Security beyond Probability_. Durham, NC: Duke University Press.
8. Anderson, Ben, and Colin McFarlane. 2011. “Assemblage and Geography.” _Area_ 43 (2): 124–127.
9. Anderson, Benedict. 1991. _Imagined Communities: Reflections on the Origin and Spread of Nationalism_. London: Verso.
10. Arms, William Y. 2000. _Digital Libraries_. Cambridge, MA: MIT Press.
11. Arvanitakis, James, and Martin Fredriksson. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
12. Association of Research Libraries. 2009. “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses.” _ARL News_ , June 5.
13. Auletta, Ken. 2009. _Googled: The End of the World As We Know It_. New York: Penguin Press.
14. Baker, Nicholson. 2002. _The Double Fold: Libraries and the Assault on Paper_. London: Vintage Books.
15. Barthes, Roland. 1977. “From Work to Text” and “The Grain of the Voice.” In _Image Music Text_ , ed. Roland Barthes. London: Fontana Press.
16. Barthes, Roland. 1981. _Camera Lucida: Reflections on Photography_. New York: Hill and Wang.
17. Bates, David W. 2002. _Enlightenment Aberrations: Error and Revolution in France_. Ithaca, NY: Cornell University Press.
18. Batt, William H. 1984. “Infrastructure: Etymology and Import.” _Journal of Professional Issues in Engineering_ 110 (1): 1–6.
19. Bawden, David, and Lyn Robinson. 2009. “The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies.” _Journal of Information Science_ 35 (2): 180–191.
20. Beck, Ulrich. 1996. “World Risk Society as Cosmopolitan Society? Ecological Questions in a Framework of Manufactured Uncertainties.” _Theory, Culture & Society_ 13 (4): 1–32.
21. Beecher, Lyman. 2012. _Faith Once Delivered to the Saints: A Sermon Delivered at Worcester, Mass., Oct. 15, 1823._ Farmington Hills, MI: Gale, Sabin Americana.
22. Belder, Lucky. 2015. “Cultural Heritage Institutions as Entrepreneurs.” In _Cultivate!: Cultural Heritage Institutions, Copyright & Cultural Diversity in the European Union & Indonesia_, eds. M. de Cock Buning, R. W. Bruin, and Lucky Belder, 157–196. Amsterdam: DeLex.
23. Benjamin, Walter. 1985a. “Central Park.” _New German Critique, NGC_ 34 (Winter): 32–58.
24. Benjamin, Walter. 1985b. “The flaneur.” In _Charles Baudelaire: a Lyric Poet in the Era of High Capitalism_. Translated by Harry Zohn. London: Verso.
25. Benjamin, Walter. 1999. _The Arcades Project_. Cambridge, MA: Harvard University Press.
26. Béquet, Gaëlle. 2009. _Digital Library as a Controversy: Gallica vs Google_. Proceedings of the 9th Conference Libraries in the Digital Age (Dubrovnik, Zadar, May 25–29, 2009). .
27. Berardi, Franco, Gary Genosko, and Nicholas Thoburn. 2011. _After the Future_. Edinburgh, UK: AK Press.
28. Berk, Hillary L. 2015. “The Legalization of Emotion: Managing Risk by Managing Feelings in Contracts for Surrogate Labor.” _Law & Society Review_ 49 (1): 143–177.
29. Bishop, Catherine. 2016. “The Serendipity of Connectivity: Piecing Together Women’s Lives in the Digital Archive.” _Women’s History Review_ 26 (5): 766–780.
30. Bivort, Olivier. 2013. “ _Le romantisme et la ‘langue de Voltaire_.’” Revue Italienne d’études Françaises, 3. DOI: 10.4000/rief.211.
31. Bjelić, Dušan I. 2016. _Intoxication, Modernity, and Colonialism: Freud’s Industrial Unconscious, Benjamin’s Hashish Mimesis_. New York: Palgrave Macmillan.
32. Blair, Ann, and Peter Stallybrass. 2010. “Mediating Information, 1450–1800”. In _This Is Enlightenment_ , eds. Clifford Siskin and William B. Warner. Chicago: University of Chicago Press.
33. Blair, Ann. 2003. “Reading Strategies for Coping with Information Overload ca. 1550–1700.” _Journal of the History of Ideas_ 64 (1): 11–28.
34. Bloom, Harold. 2009. _The Labyrinth_. New York: Bloom’s Literary Criticism.
35. Bodó, Balazs. 2015. “The Common Pathways of Samizdat and Piracy.” In _Samizdat: Between Practices and Representations_ , ed. V. Parisi. Budapest: CEU Institute for Advanced Study. Available at SSRN; .
36. Bodó, Balazs. 2016. “Libraries in the Post-Scarcity Era.” In _Copyrighting Creativity: Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property_ , ed. Helle Porsdam. New York: Routledge.
37. Bogost, Ian, and Nick Montfort. 2009. “Platform Studies: Frequently Asked Questions.” _Proceeding of the Digital Arts and Culture Conference_. .
38. Borges, Jorge Luis. 1978. “An Autobiographical Essay.” In _The Aleph and Other Stories, 1933–1969: Together with Commentaries and an Autobiographical Essay_. New York: E. P. Dutton.
39. Borges, Jorge Luis. 2001. “The Total Library.” In _The Total Library: Non-fiction 1922–1986_. London: Penguin.
40. Borges, Jorge Luis, and L. S. Dembo. 1970. “An Interview with Jorge Luis Borges.” _Contemporary Literature_ 11 (3): 315–325.
41. Borghi, Maurizio. 2012. “Knowledge, Information and Values in the Age of Mass Digitisation.” In _Value: Sources and Readings on a Key Concept of the Globalized World_ , ed. Ivo de Gennaro. Leiden, the Netherlands: Brill.
42. Borghi, Maurizio, and Stavroula Karapapa. 2013. _Copyright and Mass Digitization: A Cross-Jurisdictional Perspective_. Oxford: Oxford University Press.
43. Borgman, Christine L. 2015. _Big Data, Little Data, No Data: Scholarship in the Networked World_. Cambridge, MA: MIT Press.
44. Bottando, Evelyn. 2012. _Hedging the Commons: Google Books, Libraries, and Open Access to Knowledge_. Iowa City: University of Iowa.
45. Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. 2010. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” In _The International Handbook of Internet Research_ , eds. Hunsinger Lisbeth Klastrup Jeremy and Matthew Allen. Dordrecht, the Netherlands: Springer.
46. Bowker, Geoffrey C, and Susan L. Star. 1999. _Sorting Things Out: Classification and Its Consequences_. Cambridge, MA: MIT Press.
47. Brin, Sergey. 2009. “A Library to Last Forever.” _New York Times_ , October 8.
48. Brin, Sergey, and Lawrence Page. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” _Computer Networks and ISDN Systems_ 30 (1–7): 107.
49. Buckholtz, Alison. 2016. “New Ideas for Financing American Infrastructure: A Conversation with Henry Petroski.” _World Bank Group, Public-Private Partnerships Blog_ , March 29.
50. Buck-Morss, Susan. 2006. “The flaneur, the Sandwichman and the Whore: The Politics of Loitering.” _New German Critique_ (39): 99–140.
51. Budds, Diana. 2016. “Rem Koolhaas: ‘Architecture Has a Serious Problem Today.’” _CoDesign_ 21 (May). .
52. Burkart, Patrick. 2014. _Pirate Politics: The New Information Policy Contests_. Cambridge, MA: MIT Press.
53. Burton, James, and Daisy Tam. 2016. “Towards a Parasitic Ethics.” _Theory, Culture & Society_ 33 (4): 103–125.
54. Busch, Lawrence. 2011. _Standards: Recipes for Reality_. Cambridge, MA: MIT Press.
55. Caley, Seth. 2017. “Digitization for the Masses: Taking Users Beyond Simple Searching in Nineteenth-Century Collections Online.” _Journal of Victorian Culture : JVC_ 22 (2): 248–255.
56. Cadogan, Garnette. 2016. “Walking While Black.” Literary Hub. July 8. .
57. Callon, Michel, Madeleine Akrich, Sophie Dubuisson-Quellier, Catherine Grandclément, Antoine Hennion, Bruno Latour, Alexandre Mallard, et al. 2016. _Sociologie des agencements marchands: Textes choisis_. Paris: Presses des Mines.
58. Cameron, Fiona, and Sarah Kenderdine. 2007. _Theorizing Digital Cultural Heritage: A Critical Discourse_. Cambridge, MA: MIT Press.
59. Canepi, Kitti, Becky Ryder, Michelle Sitko, and Catherine Weng. 2013. _Managing Microforms in the Digital Age_. Association for Library Collections & Technical Services. .
60. Carey, Quinn Ann. 2015, “Maksim Moshkov and lib.ru: Russia’s Own ‘Gutenberg.’” _TeleRead: Bring the E-Books Home_. December 5. .
61. Carpentier, Nico. 2011. _Media and Participation: A Site of Ideological-Democratic Struggle_. Bristol, UK: Intellect.
62. Carr, Nicholas. 2006. “The Engine of Serendipity.” _Rough Type_ , May 18.
63. Cassirer, Ernst. 1944. _An Essay on Man: An Introduction to a Philosophy of Human Culture_. New Haven, CT: Yale University Press.
64. Castells, Manuel. 1996a. _The Rise of the Network Society_. Malden, MA: Blackwell Publishers.
65. Castells, Manuel. 1996b. _The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process_. Cambridge: Blackwell.
66. Castells, Manuel, and Gustavo Cardoso. 2012. “Piracy Cultures: Editorial Introduction.” _International Journal of Communication_ 6 (1): 826–833.
67. Chabal, Emile. 2013. “The Rise of the Anglo-Saxon: French Perceptions of the Anglo-American World in the Long Twentieth Century.” _French Politics, Culture & Society_ 31 (1): 24–46.
68. Chartier, Roger. 2004. “Languages, Books, and Reading from the Printed Word to the Digital Text.” _Critical Inquiry_ 31 (1): 133–152.
69. Chen, Ching-chih. 2005. “Digital Libraries and Universal Access in the 21st Century: Realities and Potential for US-China Collaboration.” In _Proceedings of the 3rd China-US Library Conference, Shanghai, China, March 22–25_ , 138–167. Beijing: National Library of China.
70. Chrisafis, Angelique. 2008. “Dante to Dialects: EU’s Online Renaissance.” _Guardian_ , November 21. .
71. Chun, Wendy H. K. 2006. _Control and Freedom: Power and Paranoia in the Age of Fiber Optics_. Cambridge, MA: MIT Press.
72. Chun, Wendy Hui Kyong. 2008. “The Enduring Ephemeral, or the Future Is a Memory.” _Critical Inquiry_ 35 (1): 148–171.
73. Chun, Wendy H. K. 2017. _Updating to Remain the Same_. Cambridge, MA: MIT Press.
74. Clarke, Michael Tavel. 2009. _These Days of Large Things: The Culture of Size in America, 1865–1930_. Ann Arbor: University of Michigan Press.
75. Cohen, Jerome Bernard. 2006. _The Triumph of Numbers: How Counting Shaped Modern Life_. New York: W.W. Norton.
76. Conway, Paul. 2010. “Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas.” _The Library Quarterly: Information, Community, Policy_ 80 (1): 61–79.
77. Courant, Paul N. 2006. “Scholarship and Academic Libraries (and Their Kin) in the World of Google.” _First Monday_ 11 (8).
78. Coyle, Karen. 2006. “Mass Digitization of Books.” _Journal of Academic Librarianship_ 32 (6): 641–645.
79. Darnton, Robert. 2009. _The Case for Books: Past, Present, and Future_. New York: Public Affairs.
80. Daston, Lorraine. 2012. “The Sciences of the Archive.” _Osiris_ 27 (1): 156–187.
81. Davison, Graeme. 2009. “Speed-Relating: Family History in a Digital Age.” _History Australia_ 6 (2). .
82. Deegan, Marilyn, and Kathryn Sutherland. 2009. _Transferred Illusions: Digital Technology and the Forms of Print_. Farnham, UK: Ashgate.
83. de la Durantaye, Katharine. 2011. “H Is for Harmonization: The Google Book Search Settlement and Orphan Works Legislation in the European Union.” _New York Law School Law Review_ 55 (1): 157–174.
84. DeLanda, Manuel. 2006. _A New Philosophy of Society: Assemblage Theory and Social Complexity_. London: Continuum.
85. Deleuze, Gilles. 1997. “Postscript on Control Societies.” In _Negotiations 1972–1990_ , 177–182. New York: Columbia University Press.
86. Deleuze, Gilles. 2013. _Difference and Repetition_. London: Bloomsbury Academic.
87. Deleuze, Gilles, and Félix Guattari. 1987. _A Thousand Plateaus: Capitalism and Schizophrenia_. Minneapolis: University of Minnesota Press.
88. DeNardis, Laura. 2011. _Opening Standards: The Global Politics of Interoperability_. Cambridge, MA: MIT Press.
89. DeNardis, Laura. 2014. “The Social Media Challenge to Internet Governance.” In _Society and the Internet: How Networks of Information and Communication Are Changing Our Lives_ , eds. Mark Graham and William H. Dutton. Oxford: Oxford University Press.
90. Derrida, Jacques. 1996. _Archive Fever: A Freudian Impression_. Chicago: University of Chicago Press.
91. Derrida, Jacques. 2005. _Paper Machine_. Stanford, CA: Stanford University Press.
92. Dewey, Melvin. 1926. “Our Next Half-Century.” _Bulletin of the American Library Association_ 20 (10): 309–312.
93. Dinshaw, Carolyn. 2012. _How Soon Is Now?: Medieval Texts, Amateur Readers, and the Queerness of Time_. Durham, NC: Duke University Press.
94. Doob, Penelope Reed. 1994. _The Idea of the Labyrinth: From Classical Antiquity Through the Middle Ages_. Ithaca, NY: Cornell University Press.
95. Dörk, Marian, Sheelagh Carpendale, and Carey Williamson. 2011. “The Information flaneur: A Fresh Look at Information Seeking.” _Conference on Human Factors in Computing Systems—Proceedings_ , 1215–1224.
96. Doward, Jamie. 2009. “Angela Merkel Attacks Google’s Plans to Create a Global Online Library.” _Guardian_ , October 11. .
97. Duguid, Paul. 2007. “Inheritance and Loss? A Brief Survey of Google Books.” _First Monday_ 12 (8). .
98. Earnshaw, Rae A., and John Vince. 2007. _Digital Convergence: Libraries of the Future_. London: Springer.
99. Easley, David, and Jon Kleinberg. 2010. _Networks, Crowds, and Markets: Reasoning About a Highly Connected World_. New York: Cambridge University Press.
100. Easterling, Keller. 2014. _Extrastatecraft: The Power of Infrastructure Space_. Verso.
101. Eckstein, Lars, and Anja Schwarz. 2014. _Postcolonial Piracy: Media Distribution and Cultural Production in the Global South_. London: Bloomsbury.
102. Eco, Umberto. 2014. _The Name of the Rose_. Boston: Mariner Books.
103. Edwards, Paul N. 2003. “Infrastructure and Modernity: Force, Time and Social Organization in the History of Sociotechnical Systems.” In _Modernity and Technology_ , eds. Thomas J. Misa, Philip Brey, and Andrew Feenberg. Cambridge, MA: MIT Press.
104. Edwards, Paul N., Steven J. Jackson, Melissa K. Chalmers, Geoffrey C. Bowker, Christine L. Borgman, David Ribes, Matt Burton, and Scout Calvert. 2012. _Knowledge Infrastructures: Intellectual Frameworks and Research Challenges_. Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation University of Michigan School of Information, May 25–28. .
105. Ensmenger, Nathan. 2012. _The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise_. Cambridge, MA: MIT Press.
106. Eyal, Nir. 2014. _Hooked: How to Build Habit-Forming Products_. Princeton, NJ: Princeton University Press.
107. Featherstone, Mike. 1998. “The flaneur, the City and Virtual Public Life.” _Urban Studies (Edinburgh, Scotland)_ 35 (5–6): 909–925.
108. Featherstone, Mike. 2000. “Archiving Cultures.” _British Journal of Sociology_ 51 (1): 161–184.
109. Fiske, John. 1987. _Television Culture_. London: Methuen.
110. Fjeld, Per Olaf, and Sverre Fehn. 2009. _Sverre Fehn: The Pattern of Thoughts_. New York: Monacelli Press.
111. Flyverbom, Mikkel, Paul M. Leonardi, Cynthia Stohl, and Michael Stohl. 2016. “The Management of Visibilities in the Digital Age.” _International Journal of Communication_ 10 (1): 98–109.
112. Foucault, Michel. 2002. _Archaeology of Knowledge_. London: Routledge.
113. Foucault, Michel. 2004. _Death and the Labyrinth: The World of Raymond Roussel_. Continuum International Publishing Group Ltd.
114. Foucault, Michel. 2009. _Security, Territory, Population: Lectures at the College de France, 1977–1978_. Basingstoke, UK: Palgrave Macmillan.
115. Fredriksson, Martin, and James Arvanitakis. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
116. Freedgood, Elaine. 2013. “Divination.” _PMLA_ 128 (1): 221–225.
117. Fuchs, Christian. 2014. _Digital Labour and Karl Marx_. New York: Routledge.
118. Fuller, Matthew, and Andrew Goffey. 2012. _Evil Media_. Cambridge, MA: MIT Press.
119. Galloway, Alexander R. 2013a. _The Interface Effect_. Cambridge: Polity Press.
120. Galloway Alexander, R. 2013b. “The Poverty of Philosophy: Realism and Post-Fordism.” _Critical Inquiry_ 39 (2): 347–366.
121. Gardner, Carolyn Caffrey, and Gabriel J. Gardner. 2017. “Fast and Furious (at Publishers): The Motivations behind Crowdsourced Research Sharing.” _College & Research Libraries_ 78 (2): 131–149.
122. Garrett, Jeffrey. 1999. “Redefining Order in the German Library, 1775–1825.” _Eighteenth-Century Studies_ 33 (1): 103–123.
123. Gibbon, Peter, and Lasse F. Henriksen. 2012. “A Standard Fit for Neoliberalism.” _Comparative Studies in Society and History_ 54 (2): 275–307.
124. Giesler, Markus. 2006. “Consumer Gift Systems.” _Journal of Consumer Research_ 33 (2): 283–290.
125. Gießmann, Sebastian. 2015. _Medien Der Kooperation_. Siegen, Germany: Universitet Verlag.
126. Gillespie, Tarleton. 2010. “The Politics of ‘Platforms.’” _New Media & Society_ 12 (3): 347–364.
127. Gillespie, Tarleton. 2017. “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” _Information Communication and Society_ 20 (1): 63–80.
128. Gladwell, Malcolm. 2000. _The Tipping Point: How Little Things Can Make a Big Difference_. Boston: Little, Brown.
129. Goldate, Steven. 1996. “The Cyberflaneur: Spaces and Places on the Internet.” _Art Monthly Australia_ 91:15–18.
130. Goldsmith, Jack L., and Tim Wu. 2006. _Who Controls the Internet?: Illusions of a Borderless World_. New York: Oxford University Press.
131. Goldsmith, Kenneth. 2007. “UbuWeb Wants to Be Free.” Last modified July 18, 2007. .
132. Golumbia, David. 2009. _The Cultural Logic of Computation_. Cambridge, MA: Harvard University Press.
133. Goriunova, Olga. 2012. _Art Platforms and Cultural Production on the Internet_. New York: Routledge.
134. Gradmann, Stephan. 2009. “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries.” 1st DL.org Workshop at 13th European Conference on Digital Libraries (ECDL).
135. Greene, Mark. 2010. “MPLP: It’s Not Just for Processing Anymore.” _American Archivist_ 73 (1): 175–203.
136. Grewal, David S. 2008. _Network Power: The Social Dynamics of Globalization_. New Haven, CT: Yale University Press.
137. Hacking, Ian. 1995. _Rewriting the Soul: Multiple Personality and the Sciences of Memory_. Princeton, NJ: Princeton University Press.
138. Hacking, Ian. 2010. _The Taming of Chance_. Cambridge: Cambridge University Press.
139. Hagel, John. 2012. _The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion_. New York: Basic Books.
140. Haggerty, Kevin D, and Richard V. Ericson. 2000. “The Surveillant Assemblage.” _British Journal of Sociology_ 51 (4): 605–622.
141. Hall, Gary. 2008. _Digitize This Book!: The Politics of New Media, or Why We Need Open Access Now_. Minneapolis: University of Minnesota Press.
142. Hall, Mark, et al. 2012. “PATHS—Exploring Digital Cultural Heritage Spaces.” In _Theory and Practice of Digital Libraries. TPDL 2012_ , vol. 7489, 500–503. Lecture Notes in Computer Science. Berlin: Springer.
143. Hall, Stuart, and Fredric Jameson. 1990. “Clinging to the Wreckage: a Conversation.” _Marxism Today_ (September): 28–31.
144. Hardt, Michael, and Antonio Negri. 2007. _Empire_. Cambridge, MA: Harvard University Press.
145. Hardt, Michael, and Antonio Negri. 2009. _Commonwealth_. Cambridge, MA: Harvard University Press.
146. Hartmann, Maren. 1999. “The Unknown Artificial Metaphor or: The Difficult Process of Creation or Destruction.” In _Next Cyberfeminist International_ , ed. Cornelia Sollfrank. Hamburg, Germany: obn. .
147. Hartmann, Maren. 2004. _Technologies and Utopias: The Cyberflaneur and the Experience of “Being Online.”_ Munich: Fischer.
148. Hayles, N. Katherine. 1993. “Seductions of Cyberspace.” In _Lost in Cyberspace: Essays and Far-Fetched Tales_ , ed. Val Schaffner. Bridgehampton, NY: Bridge Works Pub. Co.
149. Hayles, N. Katherine. 2005. _My Mother Was a Computer: Digital Subjects and Literary Texts_. Chicago: University of Chicago Press.
150. Helmond, Anne. 2015. “The Platformization of the Web: Making Web Data Platform Ready.” _Social Media + Society_ 1 (2). .
151. Hicks, Marie. 2018. _Programmed Inequality: How Britain Discarded Women Technologists and Lost its Edge in Computing_. Cambridge, MA: MIT Press.
152. Higgins, Vaughan, and Wendy Larner. 2010. _Calculating the Social: Standards and the Reconfiguration of Governing_. Basingstoke, UK: Palgrave Macmillan.
153. Holzer, Boris, and P. S. Mads. 2003. “Rethinking Subpolitics: Beyond the ‘Iron Cage’ of Modern Politics?” _Theory, Culture & Society_ 20 (2): 79–102.
154. Huyssen, Andreas. 2015. _Miniature Metropolis: Literature in an Age of Photography and Film_. Cambridge, MA: Harvard University Press.
155. Imerito, Tom. 2009. “Electrifying Knowledge.” _Pittsburgh Quarterly Magazine_. Summer. .
156. Janssen, Olaf. D. 2011. “Digitizing All Dutch Books, Newspapers and Magazines—730 Million Pages in 20 Years—Storing It, and Getting It Out There.” In _Research and Advanced Technology for Digital Libraries_ , eds. S. Gradmann, F. Borri, C. Meghini, and H. Schuldt, 473–476. TPDL 2011. Lecture Notes in Computer Science, vol. 6966. Berlin: Springer.
157. Jasanoff, Sheila. 2013. “Epistemic Subsidiarity—Coexistence, Cosmopolitanism, Constitutionalism.” _European Journal of Risk Regulation_ 4 (2) 133–141.
158. Jeanneney, Jean N. 2007. _Google and the Myth of Universal Knowledge: A View from Europe_. Chicago: University of Chicago Press.
159. Jones, Elisabeth A., and Joseph W. Janes. 2010. “Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read.” _Policy & Internet_ 2 (4): 43–75.
160. Jøsevold, Roger. 2016. “A National Library for the 21st Century—Knowledge and Cultural Heritage Online.” _Alexandria_ _:_ _The_ _Journal of National and International Library and Information Issues_ 26 (1): 5–14.
161. Kang, Minsoo. 2011. _Sublime Dreams of Living Machines: The Automaton in the European Imagination_. Cambridge, MA: Harvard University Press.
162. Karaganis, Joe. 2011. _Media Piracy in Emerging Economies_. New York: Social Science Research Council.
163. Karaganis, Joe. 2018. _Shadow Libraries: Access to Educational Materials in Global Higher Education_. Cambridge, MA: MIT Press.
164. Kaufman, Peter B., and Jeff Ubois. 2007. “Good Terms—Improving Commercial-Noncommercial Partnerships for Mass Digitization.” _D-Lib Magazine_ 13 (11–12). .
165. Kelley, Robin D. G. 1994. _Race Rebels: Culture, Politics, and the Black Working Class_. New York: Free Press.
166. Kelly, Kevin. 1994. _Out of Control: The Rise of Neo-Biological Civilization_. Reading, MA: Addison-Wesley.
167. Kenney, Anne R., Nancy Y. McGovern, Ida T. Martinez, and Lance J. Heidig. 2003. “Google Meets Ebay: What Academic Librarians Can Learn from Alternative Information Providers.” _D-Lib Magazine_ 9 (6).
168. Kiriya, Ilya. 2012. “The Culture of Subversion and Russian Media Landscape.” _International Journal of Communication_ 6 (1): 446–466.
169. Koevoets, Sanne. 2013. _Into the Labyrinth of Knowledge and Power: The Library as a Gendered Space in the Western Imaginary_. Utrecht, the Netherlands: Utrecht University.
170. Kolko, Joyce. 1988. _Restructuring the World Economy_. New York: Pantheon Books.
171. Komaromi, Ann. 2012. “Samizdat and Soviet Dissident Publics.” _Slavic Review_ 71 (1): 70–90.
172. Kramer, Bianca. 2016a. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 1.” _I &M / I&O 2.0_, June 20. .
173. Kramer, Bianca. 2016b. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 2.” .
174. Krysa, Joasia. 2006. _Curating Immateriality: The Work of the Curator in the Age of Network Systems_. Brooklyn, NY: Autonomedia.
175. Kurgan, Laura. 2013. _Close up at a Distance: Mapping, Technology, and Politics_. Brooklyn, NY: Zone Books.
176. Labi, Aisha. 2005. “France Plans to Digitize Its ‘Cultural Patrimony’ and Defy Google’s ‘Domination.’” _Chronicle of Higher Education_ (March): 21.
177. Larkin, Brian. 2008. _Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria_. Durham, NY: Duke University Press.
178. Latour, Bruno. 2005. _Reassembling the Social: An Introduction to Actor-Network Theory_. Oxford: Oxford University Press.
179. Latour, Bruno. 2007. “Beware, Your Imagination Leaves Digital Traces.” _Times Higher Literary Supplement_ , April 6.
180. Latour, Bruno. 2008. _What Is the Style of Matters of Concern?: Two Lectures in Empirical Philosophy_. Assen, the Netherlands: Koninklijke Van Gorcum.
181. Lavoie, Brian F., and Lorcan Dempsey. 2004. “Thirteen Ways of Looking at Digital Preservation.” _D-Lib Magazine_ 10 (July/August). .
182. Leetaru, Kalev. 2008. “Mass Book Digitization: The Deeper Story of Google Books and the Open Content Alliance.” _First Monday_ 13 (10). .
183. Lefebvre, Henri. 2009. _The Production of Space_. Malden, MA: Blackwell.
184. Lefler, Rebecca. 2007. “‘Europeana’ Ready for Maiden Voyage.” _Hollywood Reporter_ , March 23. .
185. Lessig, Lawrence. 2005a. “Lawrence Lessig on Interoperability.” _Creative Commons_ , October 19. .
186. Lessig, Lawrence. 2005b. _Free Culture: The Nature and Future of Creativity_. New York: Penguin Books.
187. Lessig, Lawrence. 2010. “For the Love of Culture—Will All of Our Literary Heritage Be Available to Us in the Future? Google, Copyright, and the Fate of American Books.” _New Republic_ 24.
188. Levy, Steven. 2011. _In the Plex: How Google Thinks, Works, and Shapes Our Lives_. New York: Simon & Schuster.
189. Lewis, Jane. 1987. _Labour and Love: Women’s Experience of Home and Family, 1850–1940_. Oxford: Blackwell.
190. Liang, Lawrence. 2009. “Piracy, Creativity and Infrastructure: Rethinking Access to Culture,” July 20.
191. Liu, Jean. 2013. “Interactions: The Numbers Behind #ICanHazPDF.” _Altmetric_ , May 9. .
192. Locke, John. 2003. _Two Treatises of Government: And a Letter Concerning Toleration_. New Haven, CT: Yale University Press.
193. Martin, Andrew, and George Ross. 2004. _Euros and Europeans: Monetary Integration and the European Model of Society_. New York: Cambridge University Press.
194. Mbembe, Achille. 2002. “The Power of the Archive and its Limits.” In _Refiguring the Archive_ , ed. Carolyn Hamilton. Cape Town, South Africa: David Philip.
195. McDonough, Jerome. 2009. “XML, Interoperability and the Social Construction of Markup Languages: The Library Example.” _Digital Humanities Quarterly_ 3 (3). .
196. McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In _Race After the Internet_ , eds. Lisa Nakamura and Peter Chow-White. New York: Routledge.
197. Meckler, Alan M. 1982. _Micropublishing: A History of Scholarly Micropublishing in America, 1938–1980_. Westport, CT: Greenwood Press.
198. Medak, Tomislav, et al. 2016. _The Radiated Book_. .
199. Merton, Robert K., and Elinor Barber. 2004. _The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science_. Princeton, NJ: Princeton University Press.
200. Meunier, Sophie. 2003. “France’s Double-Talk on Globalization.” _French Politics, Culture & Society_ 21:20–34.
201. Meunier, Sophie. 2007. “The Distinctiveness of French Anti-Americanism.” In _Anti-Americanisms in World Politics_ , eds. Peter J. Katzenstein and Robert O. Keohane. Ithaca, NY: Cornell University Press.
202. Michel, Jean-Baptiste, et al. 2011. “Quantitative Analysis of Culture Using Millions of Digitized Books.” _Science_ 331 (6014):176–182.
203. Midbon, Mark. 1980. “Capitalism, Liberty, and the Development of the Library.” _Journal of Library History (Tallahassee, Fla.)_ 15 (2): 188–198.
204. Miksa, Francis L. 1983. _Melvil Dewey: The Man and the Classification_. Albany, NY: Forest Press.
205. Mitropoulos, Angela. 2012. _Contract and Contagion: From Biopolitics to Oikonomia_. Brooklyn, NY: Minor Compositions.
206. Mjør, Kåre Johan. 2009. “The Online Library and the Classic Literary Canon in Post-Soviet Russia: Some Observations on ‘The Fundamental Electronic Library of Russian Literature and Folklore.’” _Digital Icons: Studies in Russian, Eurasian and Central European New Media_ 1 (2): 83–99.
207. Montagnani, Maria Lillà, and Maurizio Borghi. 2008. “Promises and Pitfalls of the European Copyright Law Harmonisation Process.” In _The European Union and the Culture Industries: Regulation and the Public Interest_ , ed. David Ward. Aldershot, UK: Ashgate.
208. Murrell, Mary. 2017. “Unpacking Google’s Library.” _Limn_ (6). .
209. Nakamura, Lisa. 2002. _Cybertypes: Race, Ethnicity, and Identity on the Internet_. New York: Routledge.
210. Nakamura, Lisa. 2013. “‘Words with Friends’: Socially Networked Reading on Goodreads.” _PMLA_ 128 (1): 238–243.
211. Nava, Mica, and Alan O’Shea. 1996. _Modern Times: Reflections on a Century of English Modernity_ , 38–76. London: Routledge.
212. Negroponte, Nicholas. 1995. _Being Digital_. New York: Knopf.
213. Neubert, Michael. 2008. “Google’s Mass Digitization of Russian-Language Books.” _Slavic & East European Information Resources_ 9 (1): 53–62.
214. Nicholson, William. 1819. “Platform.” In _British Encyclopedia: Or, Dictionary of Arts and Sciences, Comprising an Accurate and Popular View of the Present Improved State of Human Knowledge_. Philadelphia: Mitchell, Ames, and White.
215. Niggemann, Elisabeth. 2011. _The New Renaissance: Report of the “Comité Des Sages.”_ Brussels: Comité des Sages.
216. Noble, Safiya Umoja, and Brendesha M. Tynes. 2016. _The Intersectional Internet: Race, Sex, Class and Culture Online_. New York: Peter Lang Publishing.
217. Nord, Deborah Epstein. 1995. _Walking the Victorian Streets: Women, Representation, and the City_. Ithaca, NY: Cornell University Press.
218. Norvig, Peter. 2012. “Colorless Green Ideas Learn Furiously: Chomsky and the Two Cultures of Statistical Learning.” _Significance_ (August): 30–33.
219. O’Neill, Paul, and Søren Andreasen. 2011. _Curating Subjects_. London: Open Editions.
220. O’Neill, Paul, and Mick Wilson. 2010. _Curating and the Educational Turn_. London: Open Editions.
221. Ong, Aihwa, and Stephen J. Collier. 2005. _Global Assemblages: Technology, Politics, and Ethics As Anthropological Problems_. Malden, MA: Blackwell Pub.
222. Otlet, Paul, and W. Boyd Rayward. 1990. _International Organisation and Dissemination of Knowledge_. Amsterdam: Elsevier.
223. Palfrey, John G. 2015. _Bibliotech: Why Libraries Matter More Than Ever in the Age of Google_. New York: Basic Books.
224. Palfrey, John G., and Urs Gasser. 2012. _Interop: The Promise and Perils of Highly Interconnected Systems_. New York: Basic Books.
225. Parisi, Luciana. 2004. _Abstract Sex: Philosophy, Bio-Technology and the Mutations of Desire_. London: Continuum.
226. Patra, Nihar K., Bharat Kumar, and Ashis K. Pani. 2014. _Progressive Trends in Electronic Resource Management in Libraries_. Hershey, PA: Information Science Reference.
227. Paulheim, Heiko. 2015. “What the Adoption of Schema.org Tells About Linked Open Data.” _CEUR Workshop Proceedings_ 1362:85–90.
228. Peatling, G. K. 2004. “Public Libraries and National Identity in Britain, 1850–1919.” _Library History_ 20 (1): 33–47.
229. Pechenick, Eitan A., Christopher M. Danforth, Peter S. Dodds, and Alain Barrat. 2015. “Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution.” _PLoS One_ 10 (10).
230. Peters, John Durham. 2015. _The Marvelous Clouds: Toward a Philosophy of Elemental Media_. Chicago: University of Chicago Press.
231. Pfanner, Eric. 2011. “Quietly, Google Puts History Online.” _New York Times_ , November 20.
232. Pfanner, Eric. 2012. “Google to Announce Venture With Belgian Museum.” _New York Times_ , March 12. .
233. Philip, Kavita. 2005. “What Is a Technological Author? The Pirate Function and Intellectual Property.” _Postcolonial Studies: Culture, Politics, Economy_ 8 (2): 199–218.
234. Pine, Joseph B., and James H. Gilmore. 2011. _The Experience Economy_. Boston: Harvard Business Press.
235. Ping-Huang, Marianne. 2016. “Archival Biases and Cross-Sharing.” _NTIK_ 5 (1): 55–56.
236. Pollock, Griselda. 1998. “Modernity and the Spaces of Femininity.” In _Vision and Difference: Femininity, Feminism and Histories of Art_ , ed. Griselda Pollock, 245–256. London: Routledge & Kegan Paul.
237. Ponte, Stefano, Peter Gibbon, and Jakob Vestergaard. 2011. _Governing Through Standards: Origins, Drivers and Limitations_. Basingstoke, UK: Palgrave Macmillan.
238. Pörksen, Uwe. 1995. _Plastic Words: The Tyranny of a Modular Language_. University Park: Pennsylvania State University Press.
239. Proctor, Nancy. 2013. “Crowdsourcing—an Introduction: From Public Goods to Public Good.” _Curator_ 56 (1): 105–106.
240. Puar, Jasbir K. 2007. _Terrorist Assemblages: Homonationalism in Queer Times_. Durham, NC: Duke University Press.
241. Purdon, James. 2016. _Modernist Informatics: Literature, Information, and the State_. New York: Oxford University Press.
242. Putnam, Robert D. 1988. “Diplomacy and Domestic Politics: The Logic of Two-Level Games.” _International Organization_ 42 (3): 427–460.
243. Rabinow, Paul. 2003. _Anthropos Today: Reflections on Modern Equipment_. Princeton, NJ: Princeton University Press.
244. Rabinow, Paul, and Michel Foucault. 2011. _The Accompaniment: Assembling the Contemporary_. Chicago: University of Chicago Press.
245. Raddick, M., et al. 2009. “Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers.” _Astronomy Education Review_ 9 (1).
246. Ratto, Matt, and Boler Megan. 2014. _DIY Citizenship: Critical Making and Social Media_. Cambridge, MA: MIT Press.
247. Reichardt, Jasia. 1969. _Cybernetic Serendipity: The Computer and the Arts_. New York: Frederick A Praeger. .
248. Ridge, Mia. 2013. “From Tagging to Theorizing: Deepening Engagement with Cultural Heritage through Crowdsourcing.” _Curator_ 56 (4): 435–450.
249. Rieger, Oya Y. 2008. _Preservation in the Age of Large-Scale Digitization: A White Paper_. Washington, DC: Council on Library and Information Resources.
250. Rodekamp, Volker, and Bernhard Graf. 2012. _Museen zwischen Qualität und Relevanz: Denkschrift zur Lage der Museen_. Berlin: G+H Verlag.
251. Rogers, Richard. 2012. “Mapping and the Politics of Web Space.” _Theory, Culture & Society_ 29:193–219.
252. Romeo, Fiona, and Lucinda Blaser. 2011. “Bringing Citizen Scientists and Historians Together.” Museums and the Web. .
253. Russell, Andrew L. 2014. _Open Standards and the Digital Age: History, Ideology, and Networks_. New York: Cambridge University Press.
254. Said, Edward. 1983. “Traveling Theory.” In _The World, the Text, and the Critic_ , 226–247. Cambridge, MA: Harvard University Press.
255. Samimian-Darash, Limor, and Paul Rabinow. 2015. _Modes of Uncertainty: Anthropological Cases_. Chicago: The University of Chicago Press.
256. Samuel, Henry. 2009. “Nicolas Sarkozy Fights Google over Classic Books.” _The Telegraph_ , December 14. .
257. Samuelson, Pamela. 2010. “Google Book Search and the Future of Books in Cyberspace.” _Minnesota Law Review_ 94 (5): 1308–1374.
258. Samuelson, Pamela. 2011. “Why the Google Book Settlement Failed—and What Comes Next?” _Communications of the ACM_ 54 (11): 29–31.
259. Samuelson, Pamela. 2014. “Mass Digitization as Fair Use.” _Communications of the ACM_ 57 (3): 20–22.
260. Samyn, Jeanette. 2012. “Anti-Anti-Parasitism.” _The New Inquiry_ , September 18.
261. Sanderhoff, Merethe. 2014. _Sharing Is Caring: Åbenhed Og Deling I Kulturarvssektoren_. Copenhagen: Statens Museum for Kunst.
262. Sassen, Saskia. 2008. _Territory, Authority, Rights: From Medieval to Global Assemblages_. Princeton, NJ: Princeton University Press.
263. Schmidt, Henrike. 2009. “‘Holy Cow’ and ‘Eternal Flame’: Russian Online Libraries.” _Kultura_ 1, 4–8. .
264. Schmitz, Dawn. 2008. _The Seamless Cyberinfrastructure: The Challenges of Studying Users of Mass Digitization and Institutional Repositories_. Washington, DC: Digital Library Federation, Council on Library and Information Resources.
265. Schonfeld, Roger, and Liam Sweeney. 2017. “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries.” _Ithaka S+R_ , August 30. .
266. Schüll, Natasha Dow. 2014. _Addiction by Design: Machine Gambling in Las Vegas_. Princeton, NJ: Princeton University Press.
267. Scott, James C. 2009. _Domination and the Arts of Resistance: Hidden Transcripts_. New Haven, CT: Yale University Press.
268. Seddon, Nicholas. 2013. _Government Contracts: Federal, State and Local_. Annandale, Australia: The Federation Press.
269. Serres, Michel. 2013. _The Parasite_. Minneapolis: University of Minnesota Press.
270. Sherratt, Tim. 2013. “From Portals to Platforms: Building New Frameworks for User Engagement.” National Library of Australia, November 5. .
271. Shukaitis, Stevphen. 2009. “Infrapolitics and the Nomadic Educational Machine.” In _Contemporary Anarchist Studies: An Introductory Anthology of Anarchy in the Academy_ , ed. Randall Amster. London: Routledge.
272. Smalls, James. 2003. “‘Race’ As Spectacle in Late-Nineteenth-Century French Art and Popular Culture.” _French Historical Studies_ 26 (2): 351–382.
273. Snyder, Francis. 2002. “Governing Economic Globalisation: Global Legal Pluralism and EU Law.” In _Regional and Global Regulation of International Trade_ , 1–47. Oxford: Hart Publishing.
274. Solá-Morales, Rubió I. 1999. _Differences: Topographies of Contemporary Architecture_. Cambridge, MA: MIT Press.
275. Sollfrank, Cornelia. 2015. “Nothing New Needs to Be Created. Kenneth Goldsmith’s Claim to Uncreativity.” In _No Internet—No Art. A Lunch Byte Anthology_ , ed. Melanie Bühler. Eindhoven: Onomatopee. .
276. Somers, Margaret R. 2008. _Genealogies of Citizenship: Markets, Statelessness, and the Right to Have Rights_. Cambridge: Cambridge University Press.
277. Sparks, Peter G. 1992. _A Roundtable on Mass Deacidification._ Report on a Meeting Held September 12–13, 1991, in Andover, Massachusetts. Washington, DC: Association of Research Libraries.
278. Spivak, Gayatri C. 2000. “Megacity.” _Grey Room_ 1 (1): 8–25.
279. Srnicek, Nick. 2017. _Platform Capitalism_. Cambridge: Polity Press.
280. Stanley, Amy D. 1998. _From Bondage to Contract: Wage Labor, Marriage, and the Market in the Age of Slave Emancipation_. Cambridge: Cambridge University Press.
281. Stelmakh, Valeriya D. 2008. “Book Saturation and Book Starvation: The Difficult Road to a Modern Library System.” _Kultura_ , September 4.
282. Stiegler, Bernard. n.d. “Amateur.” Ars Industrialis: Association internationale pour une politique industrielle des technologies de l’esprit. .
283. Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” _American Behavioral Scientist_ 43 (3): 377–391.
284. Steyerl, Hito. 2012. “In Defense of the Poor Image.” In _The Wretched of the Screen_. Berlin, Germany: Sternberg Press.
285. Stiegler, Bernard. 2003. _Aimer, s’aimer, nous aimer_. Paris: Éditions Galilée.
286. Suchman, Mark C. 2003. “The Contract as Social Artifact.” _Law & Society Review_ 37 (1): 91–142.
287. Sumner, William G. 1952. _What Social Classes Owe to Each Other_. Caldwell, ID: Caxton Printers.
288. Tate, Jay. 2001. “National Varieties of Standardization.” In _Varieties of Capitalism: The Institutional Foundations of Comparative Advantage_ , ed. Peter A. Hall and David Soskice. Oxford: Oxford University Press.
289. Tawa, Michael. 2012. “Limits of Fluxion.” In _Architecture in the Space of Flows_ , eds. Andrew Ballantyne and Chris Smith. Abingdon, UK: Routledge.
290. Tay, J. S. W., and R. H. Parker. 1990. “Measuring International Harmonization and Standardization.” _Abacus_ 26 (1): 71–88.
291. Tenen, Dennis, and Maxwell Henry Foxman. 2014. “ _Book Piracy as Peer Preservation_.” Columbia University Academic Commons. doi: 10.7916/D8W66JHS.
292. Teubner, Gunther. 1997. _Global Law Without a State_. Aldershot, UK: Dartmouth.
293. Thussu, Daya K. 2007. _Media on the Move: Global Flow and Contra-Flow_. London: Routledge.
294. Tiffen, Belinda. 2007. “Recording the Nation: Nationalism and the History of the National Library of Australia.” _Australian Library Journal_ 56 (3): 342.
295. Tsilas, Nicos. 2011. “Open Innovation and Interoperability.” In _Opening Standards: The Global Politics of Interoperability_ , ed. Laura DeNardis. Cambridge, MA: MIT Press.
296. Tygstrup, Frederik. 2014. “The Politics of Symbolic Forms.” In _Cultural Ways of Worldmaking: Media and Narratives_ , ed. Ansgar Nünning, Vera Nünning, and Birgit Neumann. Berlin: De Gruyter.
297. Vaidhyanathan, Siva. 2011. _The Googlization of Everything: (and Why We Should Worry)_. Berkeley: University of California Press.
298. van Dijck, José. 2012. “Facebook as a Tool for Producing Sociality and Connectivity.” _Television & New Media_ 13 (2): 160–176.
299. Veel, Kristin. 2003. “The Irreducibility of Space: Labyrinths, Cities, Cyberspace.” _Diacritics_ 33:151–172.
300. Venn, Couze. 2006. “The Collection.” _Theory, Culture & Society_ 23:35–40.
301. Verhoeven, Deb. 2016. “As Luck Would Have It: Serendipity and Solace in Digital Research Infrastructure.” _Feminist Media Histories_ 2 (1): 7–28.
302. Vise, David A., and Mark Malseed. 2005. _The Google Story_. New York: Delacorte Press.
303. Voltaire. 1786. _Dictionaire Philosophique_ (Oeuvres Completes de Voltaire, Tome Trente-Huiteme). Gotha, Germany: Chez Charles Guillaume Ettinger, Librarie.
304. Vul, Vladimir Abramovich. 2003. “Who and Why? Bibliotechnoye Delo,” _Librarianship_ 2 (2). .
305. Walker, Kevin. 2006. “Story Structures: Building Narrative Trails in Museums.” In _Technology-Mediated Narrative Environments for Learning_ , eds. G. Dettori, T. Giannetti, A. Paiva, and A. Vaz, 103–114. Dordrecht: Sense Publishers.
306. Walker, Neil. 2003. _Sovereignty in Transition_. Oxford: Hart.
307. Weigel, Moira. 2016. _Labor of Love: The Invention of Dating_. New York: Farrar, Straus and Giroux.
308. Weiss, Andrew, and Ryan James. 2012. “Google Books’ Coverage of Hawai’i and Pacific Books.” _Proceedings of the American Society for Information Science and Technology_ 49 (1): 1–3.
309. Weizman, Eyal. 2006. “Lethal Theory.” _Log_ 7:53–77.
310. Wilson, Elizabeth. 1992. “The Invisible flaneur.” _New Left Review_ 191 (January–February): 90–110.
311. Wolf, Gary. 2003. “The Great Library of Amazonia.” _Wired_ , November.
312. Wolff, Janet. 1985. “The Invisible Flâneuse. Women and the Literature of Modernity.” _Theory, Culture & Society_ 2 (3): 37–46.
313. Yeo, Richard R. 2003. “A Solution to the Multitude of Books: Ephraim Chambers’s ‘Cyclopaedia’ (1728) as ‘the Best Book in the Universe.’” _Journal of the History of Ideas_ 64 (1): 61–72.
314. Young, Michael D. 1988. _The Metronomic Society: Natural Rhythms and Human Timetables_. Cambridge, MA: Harvard University Press.
315. Yurchak, Alexei. 1997. “The Cynical Reason of Late Socialism: Power, Pretense, and the Anekdot.” _Public Culture_ 9 (2): 161–188.
316. Yurchak, Alexei. 2006. _Everything Was Forever, Until It Was No More: The Last Soviet Generation_. Princeton, NJ: Princeton University Press.
317. Yurchak, Alexei. 2008. “Suspending the Political: Late Soviet Artistic Experiments on the Margins of the State.” _Poetics Today_ 29 (4): 713–733.
318. Žižek, Slavoj. 2009. _The Plague of Fantasies_. London: Verso.
319. Zuckerman, Ethan. 2008. “Serendipity, Echo Chambers, and the Front Page.” _Nieman Reports_ 62 (4). .

© 2018 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or
information storage and retrieval) without permission in writing from the
publisher.

This book was set in ITC Stone Sans Std and ITC Stone Serif Std by Toppan
Best-set Premedia Limited. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Names: Thylstrup, Nanna Bonde, author.

Title: The politics of mass digitization / Nanna Bonde Thylstrup.

Description: Cambridge, MA : The MIT Press, [2018] | Includes bibliographical
references and index.

Identifiers: LCCN 2018010472 | ISBN 9780262039017 (hardcover : alk. paper)

eISBN 9780262350044

Subjects: LCSH: Library materials--Digitization. | Archival materials--
Digitization. | Copyright and digital preservation.

Classification: LCC Z701.3.D54 T49 2018 | DDC 025.8/4--dc23

Dekker & Barok
Copying as a Way to Start Something New A Conversation with Dusan Barok about Monoskop
2017


COPYING AS A WAY TO START SOMETHING NEW
A Conversation with Dusan Barok about Monoskop

Annet Dekker

Dusan Barok is an artist, writer, and cultural activist involved in critical practice in the fields of software, art, and theory. After founding and organizing the online culture portal Koridor in Slovakia from 1999–2002, in 2003 he co-founded the BURUNDI media lab where he organized the Translab evening series. A year later, the first ideas about building an online platform for texts and media started to emerge and Monoskop became a reality. More than a decade later, Barok is well-known as the main editor of Monoskop. In 2016, he began a PhD research project at the University of Amsterdam. His project, titled Database for the Documentation of Contemporary Art, investigates art databases as discursive platforms that provide context for artworks. In an extended email exchange, we discuss the possibilities and restraints of an online ‘archive’.
ANNET DEKKER

You started Monoskop in 2004, already some time ago. What does the name mean?

DUSAN BAROK

‘Monoskop’ is the Slovak equivalent of the English ‘monoscope’, which means an electric tube used in analogue TV broadcasting to produce images of test cards, station logotypes, error messages but also for calibrating cameras. Monoscopes were automatized television announcers designed to speak to both live and machine audiences about the status of a channel, broadcasting purely phatic messages.

AD

Can you explain why you wanted to do the project and how it developed to what it is now? In other words, what were your main aims and have they changed? If so, in which direction and what caused these changes?
DB

I began Monoskop as one of the strands of the BURUNDI media lab in Bratislava. Originally, it was designed as a wiki website for documenting media art and culture in the eastern part of Europe, whose backbone consisted of city entries composed of links to separate pages about various events, initiatives, and individuals. In the early days it was modelled on Wikipedia (which had been running for two years when Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over the years, the geographic and thematic boundaries have gradually expanded to embrace the arts and humanities in their widest sense, focusing primarily on lesser-known phenomena [see for example https://monoskop.org/Features, accessed 28 May 2016]. Perhaps the biggest change is the ongoing shift from mapping people, events, and places towards synthesizing discourses.

A turning point occurred during my studies at the Piet Zwart Institute, in the Networked Media programme from 2010–2012, which combined art, design, software, and theory with support in the philosophy of open source and prototyping. While there, I was researching aspects of the networked condition and how it transforms knowledge, sociality and economics: I wrote research papers on leaking as a technique of knowledge production, a critique of the social graph, and on the libertarian values embedded in the design of digital currencies. I was ready for more practice. When Aymeric Mansoux, one of the tutors, encouraged me to develop my then side-project Monoskop into a graduation work, the timing was good.

The website got its own domain, a redesign, and most crucially, the Monoskop wiki was restructured from its focus on media art and culture towards the much wider embrace of the arts and humanities. It turned to a media library of sorts. The graduation work also consisted of a symposium about personal collecting and media archiving [https://monoskop.org/Symposium, accessed 28 May 2016], which saw its loose follow-ups on media aesthetics (in Bergen) [https://monoskop.org/The_Extensions_of_Many, accessed 28 May 2016] and on knowledge classification and archives (in Mons) [https://monoskop.org/Ideographies_of_Knowledge, accessed 28 May 2016] last year.

AD

Did you have a background in library studies, or have you taken their ideas/methods of systemization and categorization (meta data)? If not, what are your methods and how did you develop them?


DB

Besides the standard literature in information science (I have a degree in information technologies), I read some works of documentation scientists Paul Otlet and Suzanne Briet, historians such as W. Boyd Rayward and Ronald E. Day, as well as translated writings of Michel Pêcheux and other French discourse analysts of the 1960s and 1970s. This interest was triggered in late 2014 by the confluence of Femke’s Mondotheque project and an invitation to be an artist-in-residence in Mons in Belgium at the Mundaneum, home to Paul Otlet’s recently restored archive.

This led me to identify three tropes of organizing and navigating written records, which has guided my thinking about libraries and research ever since: class, reference, and index. Classification entails tree-like structuring, such as faceting the meanings of words and expressions, and developing classification systems for libraries. Referencing stands for citations, hyperlinking and bibliographies. Indexing ranges from the listing of occurrences of selected terms to an ‘absolute’ index of all terms, enabling full-text search.

With this in mind, I have done a number of experiments. There is an index of selected persons and terms from across the Monoskop wiki and Log [https://monoskop.org/Index, accessed 28 May 2016]. There is a growing list of wiki entries with bibliographies and institutional infrastructures of fields and theories in the humanities [https://monoskop.org/Humanities, accessed 28 May 2016]. There is a lexicon aggregating entries from some ten dictionaries of the humanities into a single page with hyperlinks to each full entry (unpublished). There is an alternative interface to the Monoskop Log, in which entries are navigated solely through a tag cloud acting as a multidimensional filter (unpublished). There is a reader containing some fifty books whose mutual references are turned into hyperlinks, and whose main interface consists of terms specific to each text, generated through tf-idf algorithm (unpublished). And so on.
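
By way of illustration, a minimal sketch of the tf-idf ranking mentioned above, assuming a toy corpus and tokenizer; it is not Monoskop’s actual code:

```python
# Illustrative sketch: score the terms most specific to one text in a small
# corpus with tf-idf, roughly the idea behind per-text keyword interfaces.
import math
import re
from collections import Counter

corpus = {  # hypothetical toy corpus
    "text_a": "the archive and the index of the archive",
    "text_b": "the library and the reader of the library",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

doc_tokens = {name: tokenize(text) for name, text in corpus.items()}
doc_freq = Counter(term for tokens in doc_tokens.values() for term in set(tokens))
n_docs = len(doc_tokens)

def specific_terms(name: str, k: int = 3) -> list[str]:
    """Rank the terms of one document by tf-idf against the rest of the corpus."""
    tokens = doc_tokens[name]
    tf = Counter(tokens)
    scores = {
        term: (count / len(tokens)) * math.log(n_docs / doc_freq[term])
        for term, count in tf.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(specific_terms("text_a"))  # e.g. ['archive', 'index', ...]
```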

DB

The publishing market frames the publication as a singular
body of work, autonomous from other titles on offer, and
subjects it to the rules of the market—with a price tag and
copyright notice attached. But for scholars and artists, these
are rarely an issue. Most academic work is subsidized from
public sources in the first place, and many would prefer to
give their work away for free since openness attracts more
citations. Why they opt to submit to the market is for quality
editing and an increase of their own symbolic value in direct
proportion to the ranking of their publishing house. This
is not dissimilar from the music industry. And indeed, for
many the goal is to compose chants that would gain popularity across academia and get their place in the popular
imagination.
On the other hand, besides providing access, digital
libraries are also fit to provide context by treating publications as a corpus of texts that can be accessed through an
unlimited number of interfaces designed with an understanding of the functionality of databases and an openness
to the imagination of the community of users. This can
be done by creating layers of classification, interlinking
bodies of texts through references, creating alternative
indexes of persons, things and terms, making full-text
search possible, making visual search possible—across
the whole of the corpus as well as its parts, and so on. Isn’t
this what makes a difference? To be sure, websites such
as Aaaaarg and Monoskop have explored only the tip of
the iceberg of possibilities. There is much more to tinker
and hack around.


AD

It is interesting that whilst the accessibility and search potential have radically changed, the content, a book or any other
text, is still a particular kind of thing with its own characteristics and forms. Whereas the process of writing texts seems
hard to change, would you be interested in creating more
alliances between texts to bring out new bibliographies? In
this sense, starting to produce new texts, by including other
texts and documents, like emails, visuals, audio, CD-ROMs,
or even un-published texts or manuscripts?
DB

Currently Monoskop is compiling more and more ‘source’
bibliographies, containing digital versions of actual texts
they refer to. This has been very much in focus in the past
two or three years and Monoskop is now home to hundreds
of bibliographies of twentieth-century artists, writers, groups,
and movements as well as of various theories and humanities
disciplines.7 As the next step I would like to move on to
enabling full-text search within each such bibliography.
This will make more apparent that the ‘source’ bibliography
is a form of anthology, a corpus of texts representing a
discourse. Another issue is to activate cross-references
within texts—to turn page numbers in bibliographic citations
inside texts into hyperlinks leading to other texts.

7  See for example https://monoskop.org/Foucault, https://monoskop.org/Lissitzky, https://monoskop.org/Humanities. All accessed 28 May 2016.

This is to experiment further with the specificity of digital text. Which is different both to oral speech and printed
books. These can be described as three distinct yet mutually
encapsulated domains. Orality emphasizes the sequence
and narrative of an argument, in which words themselves
are imagined as constituting meaning. Specific to writing,
on the other hand, is referring to the written record; texts
are brought together by way of references, which in turn
create context, also called discourse. Statements are ‘fixed’
to paper and meaning is constituted by their contexts—both
within a given text and within a discourse in which it is
embedded. What is specific to digital text, however, is that
we can search it in milliseconds. Full-text search is enabled
by the index—search engines operate thanks to bots that
assign each expression a unique address and store it in a
database. In this respect, the index usually found at the
end of a printed book is something that has been automated
with the arrival of machine search.
In other words, even though knowledge in the age of the
internet is still being shaped by the departmentalization of
academia and its related procedures and rituals of discourse
production, and its modes of expression are centred around
the verbal rhetoric, the flattening effects of the index really
transformed the ways in which we come to ‘know’ things.
To ‘write’ a ‘book’ in this context is to produce a searchable
database instead.
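
What this passage calls the index can be pictured as an inverted index, a table from each term to the texts in which it occurs. The toy sketch below (Python, with invented document names; not the code of any actual search engine) builds one and answers a two-word query.

    import re
    from collections import defaultdict

    def build_index(documents):
        # map each term to the set of document names containing it
        index = defaultdict(set)
        for name, text in documents.items():
            for term in re.findall(r"[a-z]+", text.lower()):
                index[term].add(name)
        return index

    def search(index, query):
        # boolean AND over the query terms
        terms = query.lower().split()
        results = set(index.get(terms[0], set())) if terms else set()
        for term in terms[1:]:
            results &= index.get(term, set())
        return results

    # hypothetical documents
    docs = {
        "briet": "statements are fixed to paper and read in context",
        "otlet": "digital text can be searched in milliseconds",
    }
    index = build_index(docs)
    print(search(index, "digital text"))   # -> {'otlet'}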

AD

So, perhaps we finally have come to ‘the death of the author’,
at least in so far as automated mechanisms are becoming active agents in the (re)creation process. To return to
Monoskop in its current form, what choices do you make
regarding the content of the repositories, are there things
you don’t want to collect, or wish you could but have not
been able to?
DB

In a sense, I turned to a wiki and started Monoskop as
a way to keep track of my reading and browsing. It is a
by-product of a succession of my interests, obsessions, and
digressions. That it is publicly accessible is a consequence
of the fact that paper notebooks, text files kept offline and
private wikis proved to be inadequate at the moment when I
needed to quickly find notes from reading some text earlier.
It is not perfect, but it solved the issue of immediate access
and retrieval. Plus there is a bonus of having the body of
my past ten or twelve years of reading mutually interlinked
and searchable. An interesting outcome is that these ‘notes’
are public—one is motivated to formulate and frame them
as to be readable and useful for others as well. A similar
difference is between writing an entry in a personal diary
and writing a blog post. That is also why the autonomy
of technical infrastructure is so important here. Posting
research notes on Facebook may increase one’s visibility
among peers, but the ‘terms of service’ say explicitly that
anything can be deleted by administrators at any time,
without any reason. I ‘collect’ things that I wish to be able
to return to, to remember, or to recollect easily.
AD

Can you describe the process, how do you get the books,
already digitized, or do you do a lot yourself? In other words,
could you describe the (technical) process and organizational aspects of the project?
DB

In the beginning, I spent a lot of time exploring other digital
libraries which served as sources for most of the entries on
Log (Gigapedia, Libgen, Aaaaarg, Bibliotik, Scribd, Issuu,
Karagarga, Google filetype:pdf). Later I started corresponding with a number of people from around the world (NYC,
Rotterdam, Buenos Aires, Boulder, Berlin, Ploiesti, etc.) who
contribute scans and links to scans on an irregular basis.
Out-of-print and open-access titles often come directly from
authors and publishers. Many artists’ books and magazines
were scraped or downloaded through URL manipulation
from online collections of museums, archives and libraries.
Needless to say, my offline archive is much bigger than
what is on Monoskop. I tend to put online the files I prefer
not to lose. The web is the best backup solution I have
found so far.
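
The ‘URL manipulation’ mentioned above usually amounts to noticing that a collection numbers its scans sequentially and looping over that pattern. A minimal sketch, with an entirely made-up URL scheme and only the crudest stop condition:

    import urllib.error
    import urllib.request

    # hypothetical collection that serves scans as .../item/1234/page-001.jpg, page-002.jpg, ...
    BASE = "https://example.org/collection/item/1234/page-{:03d}.jpg"

    for page in range(1, 500):
        url = BASE.format(page)
        try:
            urllib.request.urlretrieve(url, "page-{:03d}.jpg".format(page))
        except urllib.error.HTTPError:
            break  # first missing page: assume the item has ended
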
The Monoskop wiki is open for everyone to edit; any user
can upload their own works or scans and many do. Many of
those who spent more time working on the website ended up
being my friends. And many of my friends ended up having
an account as well :). For everyone else, there is no record
kept about what one downloaded, what one read and for
how long... we don’t care, we don’t track.


AD

In what way has the larger (free) publishing context changed
your project? There are currently several free text sharing
initiatives around (some already before you started, like Textz.com
or Aaaaarg); how do you collaborate, or distinguish yourselves
from each other?
DB

It should not be an overstatement to say that while in the
previous decade Monoskop was shaped primarily by the
‘media culture’ milieu which it intended to document, the
branching out of its repository of highlighted publications
Monoskop Log in 2009, and the broadening of its focus to
also include the whole of the twentieth and twenty-first
century situates it more firmly in the context of online
archives, and especially digital libraries.
I only got to know others in this milieu later. I approached
Sean Dockray in 2010, Marcell Mars approached me the
following year, and then in 2013 he introduced me to Kenneth Goldsmith. We are in steady contact, especially through
public events hosted by various cultural centres and galleries.
The first large one was held at Ljubljana’s hackerspace Kiberpipa in 2012. Later came the conferences and workshops
organized by Kuda at a youth centre in Novi Sad (2013), by
the Institute of Network Cultures at WORM, Rotterdam (2014),
WKV and Akademie Schloss Solitude in Stuttgart (2014),
Mama & Nova Gallery in Zagreb (2015), ECC at Mundaneum,
Mons (2015), and most recently by the Media Department
of the University of Malmo (2016).8

8  For more information see https://monoskop.org/Digital_libraries#Workshops_and_conferences. Accessed 28 May 2016.

The leitmotif of all these events was the digital library
and their atmosphere can be described as the spirit of
early hacker culture that eventually left the walls of a
computer lab. Only rarely have there been professional
librarians, archivists, and publishers among the speakers,
even though the voices represented were quite diverse.
To name just the more frequent participants... Marcell
and Tom Medak (Memory of the World) advocate universal
access to knowledge informed by the positions of the Yugoslav
Marxist school Praxis; Sean’s work is critical of the militarization and commercialization of the university (in the
context of which Aaaaarg will always come as secondary, as
an extension of The Public School in Los Angeles); Kenneth
aims to revive the literary avant-garde while standing on the
shoulders of his heroes documented on UbuWeb; Sebastian
Lütgert and Jan Berger are the most serious software developers among us, while their projects such as Textz.com and
Pad.ma should be read against critical theory and Situationist cinema; Femke Snelting has initiated the collaborative
research-publication Mondotheque about the legacy of the
early twentieth century Brussels-born information scientist
Paul Otlet, triggered by the attempt of Google to rebrand him
as the father of the internet.
I have been trying to identify implications of the digital-networked textuality for knowledge production, including humanities research, while speaking from the position
of a cultural worker who spent his formative years in the
former Eastern Bloc, experiencing freedom as that of unprecedented access to information via the internet following
the fall of the Berlin Wall. In this respect, Monoskop is a way
to bring into ‘archival consciousness’ what the East had
missed out during the Cold War. And also more generally,
what the non-West had missed out in the polarized world,
and vice versa, what was invisible in the formal Western
cultural canons.
There have been several attempts to develop new projects,
and the collaborative efforts have materialized in shared
infrastructure and introductions of new features in respective platforms, such as PDF reader and full-text search on
Aaaaarg. Marcell and Tom along with their collaborators have
been steadily developing the Memory of the World library and
Sebastian resuscitated Textz.com. Besides that, there are
overlaps in titles hosted in each library, and Monoskop bibliographies extensively link to scans on Libgen and Aaaaarg,
while artists’ profiles on the website link to audio and video
recordings on UbuWeb.


AD

It is interesting to hear that there weren’t any archivists or
professional librarians involved (yet). What is your position
towards these professional and institutional entities and
persons?
DB

As the recent example of Sci-Hub showed, in the age of
digital networks, for many researchers libraries are primarily free proxies to corporate repositories of academic
journals.9 Their other emerging role is that of a digital
repository of works in the public domain (the role pioneered
in the United States by Project Gutenberg and
Internet Archive). There have been too many attempts
to transpose librarians’ techniques from the paperbound
world into the digital domain. Yet, as I said before, there
is much more to explore. Perhaps the most exciting inventive approaches can be found in the field of classics, for
example in the Perseus Digital Library & Catalog and the
Homer Multitext Project. Perseus combines digital editions
of ancient literary works with multiple lexical tools in a way
that even a non-professional can check and verify a disputable translation of a quote. Something that is hard to
imagine being possible in print.

9  For more information see www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone. Accessed 28 May 2016.

AD

I think it is interesting to see how Monoskop and other
repositories like it have gained different constituencies
globally, for one you can see the kind of shift in the texts
being put up. From the start you tried to bring in a strong
‘eastern European voice’, nevertheless at the moment the
content of the repository reflects a very western perspective on critical theory. What are your future goals? And do
you think it would be possible to include other voices? For
example, have you ever considered the possibility of users
uploading and editing texts themselves?
DB

The site certainly started with the primary focus on east-central European media art and culture, which I considered
myself to be part of in the early 2000s. I was naive enough
to attempt to make a book on the theme between 2008 and 2010.
During that period I came to notice the ambivalence of the
notion of medium in an art-historical and technological
sense (thanks to Florian Cramer). My understanding of
media art was that it is an art specific to its medium, very
much in Greenbergian terms, extended to the more recent
‘developments’, which were supposed to range from neo-geometrical painting through video art to net art.
At the same time, I implicitly understood art in the sense
of ‘expanded arts’, as employed by the Fluxus in the early
1960s—objects as well as events that go beyond the (academic) separation between the arts to include music, film,
poetry, dance, design, publishing, etc., which in turn made
me also consider such phenomena as experimental film,
electro-acoustic music and concrete poetry.
Add to it the geopolitically unstable notion of East-Central
Europe and the striking lack of research in this area and
all you end up with is a headache. It took me a while to
realize that there’s no point even attempting to write a coherent narrative of the history of media-specific expanded
arts of East-Central Europe of the past hundred years. I
ended up with a wiki page outlining the supposed milestones
along with a bibliography.10

10  https://monoskop.org/CEE. Accessed 28 May 2016. And https://monoskop.org/Central_and_Eastern_Europe_Bibliography. Accessed 28 May 2016.

For this strand, the wiki served as the main notebook,
leaving behind hundreds of wiki entries. The Log was
more or less a ‘log’ of my research path and the presence
of ‘western’ theory is to a certain extent a by-product of
my search for a methodology and theoretical references.
As an indirect outcome, a new wiki section was
launched recently. Instead of writing a history of media-specific ‘expanded arts’ in one corner of the world, it takes
a somewhat different approach. Not a sequential text, not
even an anthology, it is an online single-page annotated
index, a ‘meta-encyclopaedia’ of art movements and styles,
intended to offer an expansion of the art-historical canonical
prioritization of the western painterly-sculptural tradition
to also include other artists and movements around the
world.11

11  https://monoskop.org/Art. Accessed 28 May 2016.

AD

Can you say something about the longevity of the project?
You briefly mentioned before that the web was your best
backup solution. Yet, it is of course known that websites
and databases require a lot of maintenance, so what will
happen to the type of files that you offer? More and more
voices are saying that, for example, the PDF format is anything
but stable. How do you deal with such challenges?
DB

Surely, in the realm of bits, nothing is designed to last
forever. Uncritical adoption of Flash had turned out to be
perhaps the worst tragedy so far. But while there certainly
were more sane alternatives if one was OK with renouncing its emblematic visual effects and aesthetics that went
with it, with PDF it is harder. There are EPUBs, but scholarly publications are simply unthinkable without page
numbers, which are not supported in this format. Another
challenge the EPUB faces is from artists' books and other
design- and layout-conscious publications—its simplified
HTML format does not match the range of possibilities for
typography and layout one is used to from designing for
paper. Another open-source solution, PNG tarballs, is not
a viable alternative for sharing books.
The main schism between PDF and HTML is that one represents the domain of print (easily portable, and with fixed
page size), while the other represents the domain of the web (embedded
within it by hyperlinks pointing both directions, and with
flexible page size). EPUB is developed with the intention of
synthesizing both of them into a single format, but instead
it reduces them into a third container, which is doomed to
reinvent the whole thing once again.
It is unlikely that there will appear an ultimate convertor
between PDF and HTML, simply because of the specificities
of print and the web and the fact that they overlap only in
some respects. Monoskop tends to provide HTML formats
next to PDFs where time allows. And if the PDF were to
suddenly be doomed, there would be a big conversion party.
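
The ‘big conversion party’ would in practice be a batch job over the whole collection. A minimal sketch, assuming the pdftotext tool from poppler-utils is installed and that the files sit in a flat local directory (both assumptions; this describes no actual Monoskop workflow):

    import pathlib
    import subprocess

    src = pathlib.Path("pdfs")    # hypothetical input directory
    dst = pathlib.Path("txt")     # hypothetical output directory
    dst.mkdir(exist_ok=True)

    for pdf in sorted(src.glob("*.pdf")):
        out = dst / (pdf.stem + ".txt")
        # -layout asks pdftotext to preserve the page's column layout
        subprocess.run(["pdftotext", "-layout", str(pdf), str(out)], check=True)
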
On the side of audio and video, most media files on
Monoskop are in open formats—OGG and WEBM. There
are many other challenges: keeping up-to-date with PHP
and MySQL development, with the MediaWiki software
and its numerous extensions, and the mysterious ICANN
organization that controls the web domain.


AD

What were your biggest challenges besides technical ones?
For example, have you ever been in trouble regarding copyright issues, or if not, how would you deal with such a
situation?
DB

Monoskop operates on the assumption of making transformative use of the collected material. The fact of bringing
it into certain new contexts, in which it can be accessed,
viewed and interpreted, adds something that bookstores
don’t provide. Time will show whether this can be understood as fair use. It is an opt-out model and it proves to
be working well so far. Takedowns are rare, and if they are
legitimate, we comply.
AD

Perhaps related to this question, what is your experience
with user engagement? I remember Sean (from Aaaaarg,
in conversation with Matthew Fuller, Mute 2011) saying
that some people mirror or download the whole site, not
so much in an attempt to ‘have everything’ but as a way
to make sure that the content remains accessible. It is a
conscious decision because one knows that one day everything might be taken down. This is of course particularly
pertinent since, while we’re doing this interview,
Sean and Marcell are being sued by a Canadian publisher.
DB

That is absolutely true and any of these websites can disappear any time. Archives like Aaaaarg, Monoskop or UbuWeb
are created by makers rather than guardians and it comes
as an imperative to us to embrace redundancy, to promote
spreading their contents across as many nodes and sites
as anyone wishes. We may look at copying not as merely
mirroring or making backups, but opening up for possibilities to start new libraries, new platforms, new databases.
That is how these came about as well. Let there be Zzzzzrgs,
Ůbuwebs and Multiskops.

Bibliography
Fuller, Matthew. ‘In the Paradise of Too Many Books: An Interview with
Sean Dockray’. Mute, 4 May 2011. www.metamute.org/editorial/articles/paradise-too-many-books-interview-sean-dockray. Accessed 31 May 2016.
Online digital libraries
Aaaaarg, http://aaaaarg.fail.
Bibliotik, https://bibliotik.me.
Issuu, https://issuu.com.
Karagarga, https://karagarga.in.
Library Genesis / LibGen, http://gen.lib.rus.ec.
Memory of the World, https://library.memoryoftheworld.org.
Monoskop, https://monoskop.org.
Pad.ma, https://pad.ma.
Scribd, https://scribd.com.
Textz.com, https://textz.com.
UbuWeb, www.ubu.com.


Constant
Conversations
2015


This book documents an ongoing dialogue between developers and designers involved in the wider ecosystem of Libre
Graphics. Its lengthy title, I think that conversations are the
best, biggest thing that Free Software has to offer its user, is taken
from an interview with Debian developer Asheesh Laroia, Just
ask and that will be that, included in this publication. His remark points at the difference that Free Software can make when
users are invited to consider, interrogate and discuss not only
the technical details of software, but its concepts and histories
as well.
Conversations documents discussions about tools and practices
for typography, layout and image processing that stretch out
over a period of more than eight years. The questions and answers were recorded in the margins of events such as the yearly
Libre Graphics Meeting, the Libre Graphics Research Unit,
a two-year collaboration between Medialab Prado in Madrid,
Worm in Rotterdam, Piksel in Bergen and Constant in Brussels,
or as part of documenting the work process of the Brussels’
design team OSP. Participants in these intersecting events and
organisations constitute the various instances of ‘we’ and ‘I’ that
you will discover throughout this book.
The transcriptions are loosely organised around three themes:
tools, communities and design. At the same time, I invite you
to read Conversations as a chronology of growing up in Libre
Graphics, a portrait of a community gradually grasping the interdependencies between Free Software and design practice.
Femke Snelting
Brussels, December 2014

Introduction

A user should not be able to shoot himself in the foot

I think the ideas behind it are beautiful in my mind

We will get to know the machine and we will understand
ConTeXt and the ballistics of design
Meaningful transformations

Tools for a Read Write World
Etat des Lieux

Distributed Version Control

Even when you are done, you are not done
Having the tools is just the beginning
Data analysis as a discourse

Why you should own the beer company you design for
Just Ask and That Will Be That
Tying the story to data
Unicodes

If the design thinking is correct, the tools should be irrelevant
You need to copy to understand
What’s the thinking here

The construction of a book (Aether9)
Performing Libre Graphics

The Making of Conversations

Colophon
Keywords
Free Art License

Larisa Blazic:

Introduction

Computational concepts, their technological language and the hybridisation of creative practice have been successfully explored in Media Arts for a
few decades now. Digital was a narrative, a tool and a concept, an aesthetic
and political playground of sorts. These experiments created a notion of
the digital artisan and creative technologist on the one hand and enabled
a new view of intellectual property on the other. They widened a pathway
to participation, collaboration and co-creation in creative software development, looking critically at the software as cultural production as well as
technological advance.
This book documents conversations between artists, typographers, designers, developers and software engineers involved in Libre Graphics, an independent, self-organised, international community revolving around Free,
Libre, Open Source software (F/LOSS). Libre Graphics resembles the community of Media arts of the late twentieth century, in so far as it uses
software as a departure point for creative exploration of design practice. In
some cases it adopts software development processes and applies them to
graphic design, using version control and platforms such as GitHub, but it
also banks on a paradigm shift that Free Software offers – an active engagement with software to bend it, fork it, reshape it – and in that it establishes
conversations with a developers community that haven’t taken place before.
This pathway was, however, at moments full of tension, created by diverging views on what the development process entails and what it might
mean. The conversations brought together in this book resulted from the
need to discuss those complex issues and to address the differences and similarities between design, design production, Free Culture and software development. As in theatre, where it is said that conflict drives the plot forward,
so it does here. It makes us think harder about the ethics of our practices
while we develop tools and technologies for the benefit of all.
The Libre Graphics Meeting (LGM) was brought to my attention in
2012 as an interesting example of dialogue between creative types and developers. The event had been running since 2006 and was originally conceived as an
annual gathering for discussions about Free and Open Source software used
in graphics. At the time I was teaching at the University of Westminster
for nearly ten years. The subject was computers, arts and design and it took
a variety of forms; sometimes focused on graphic design, sometimes on
contemporary media practice, interaction design, software design and mysterious hypermedia. F/LOSS was part of my artistic practice for many years,
but its inclusion in UK Higher Education was a real challenge. My
frustration with difficult computer departments grew exponentially year by
year and LGM looked like a place to visit and get much needed support.
Super fast-forward to Madrid in April 2013: I landed. Little did I know
that this journey would change everything. Firstly, the wonderfully diverse
group of people present: artists, designers, software developers, typographers, interface designers, more software developers! It was very exciting
listening to talks, overhearing conversations in breaks, observing group discussions and slowly engaging with the Libre Graphics community. Being
there to witness how far the F/LOSS community has come was so heartwarming and uplifting, that my enthusiasm was soaring.
The main reason for my attendance at the Madrid LGM was to join
the launch of a network of Free Culture aware educators in art, music and
design education. 1 Aymeric Mansoux and his colleagues from the Willem
De Kooning Academie and the Piet Zwart Institute in Rotterdam convened
the first ever meeting of the network with the aim to map out a landscape
of current educational efforts as well as to share experiences. I was aware of
Aymeric’s efforts through his activities with GOTO10 and the FLOSS+Art
book 2 that they published a couple of years before we finally met. Free
Culture was deeply embedded in his artistic and educational practice, and it
was really good to have someone like him set the course of discussion.

1  http://eightycolumn.net/
2  Aymeric Mansoux and Marloes de Valk. FLOSS+Art. OpenMute, 2008. http://things.bleu255.com/floss-art

Lo and behold, the conversation started – we sat in a big circle in the
middle of Medialab Prado. The introduction round began, and I thought:
there are so many people using F/LOSS in their teaching! Short courses,
long courses, BA courses, MA courses, summer schools, all sorts! There
were so many solutions presented for overcoming institutional barricades,
Adobe marriages and Apple hostages. Individual efforts and group efforts,
long term and short, a whole world of conventional curriculums as well as
a variety of educational experimentations were presented. Just sitting there,
listening about shared troubles and achievements was enough to give me a
new surge of energy to explore new strategies for engaging BA level students
with F/LOSS tools and communities.
Taking part in LGM 2013 was a useful experience that has informed
my art and educational practice since. It was clear from the gathering that
F/LOSS is not a ghetto for idealists and techno fetishists – it was ready for
an average user, it was ready for a specialist user, it was ready for all and
what is most important, the communication lines were open. Given that
Linux distributions extend the life of a computer by at least ten years, in
combination with the likes of Libre Graphics, Open Video and a plethora
of other F/LOSS software, the benefits are manifold, important for all and
not to be ignored by any form of creative practice worldwide.

Libre Graphics seems to offer a very exciting transformation of graphic design practice through implementation of F/LOSS software development and
production processes. A hybridisation across these often separated fields of
practice that takes into consideration openness and freedom to create, copy,
manipulate and distribute, while contributing to the development of visual
communication itself. All this may give a new lease of life to an over-commercialised
graphic design practice, banalised by mainstream culture.
This book brings together reflections on collaboration and co-creation
in graphic design, typography and desktop publishing, but also on gender
issues and inclusion in the Libre Graphics community. It offers a paradigm
shift, supported by historical research into graphic and type design practice,
that creates strong arguments to re-engage with the tools of production.
The conversations conducted give an overview of a variety of practices and
experiences which show the need for more conversations and which can help
educate designers and developers alike. It gives detailed descriptions of the
design processes, productions and potential trade-offs when engaged in software design and development while producing designed artefacts. It points
to the importance of transparent software development, breaking stereotypes and establishing a new image of the designer-developer combo, a fresh
perspective of mutual respect between disciplines and a desire to engage in
exchange of knowledge that is beneficial beyond what any proprietary software could ever be.
Larisa Blazic is a media artist living and working in London. Her interests range from

creative collaborations to intersections between video art and architecture. As senior lecturer
at the Faculty of Media, Arts and Design of the University of Westminster, she is currently
developing a master’s program on F/LOSS art & design.


While in the background participants of the Libre Graphics
Meeting 2007 start saying goodbye to each other, Andreas
Vox makes time to sit down with us to talk about Scribus,
the Open Source application for professional page layout.
The software is significant not only to its users who design with it, but also because Scribus helps us think about
links between software, Free Culture and design. Andreas
is a mathematician with an interest in system dynamics,
who lives and works in Lübeck, Germany. Together with
Franz Schmid, Petr Vanek (subik), Riku Leino (Tsoots),
Oleksandr Moskalenko (malex), Craig Bradney (MrB), Jean
Ghali and Peter Linnel (mrdocs) he forms the core Scribus
developer team. He has been working on Scribus since
2003 and is currently responsible for redesigning the internal workings of its text layout system.
This weekend Peter Linnel presented amongst many other new Scribus features 1 ,
‘The Color Wheel’, which at the click of a button visualises documents the way
they would be perceived by a colour blind person. Can you explain how such a
feature entered into Scribus? Did you for example speak to accessibility experts?

I don’t think we did. The code was implemented by subik 2 , a developer
from the Czech Republic. As far as I know, he saw a feature somewhere else
or he found an article about how to do this kind of stuff, and I don’t know
where he did it, but I would have to ask him. It was a logical extension of the
colour wheel functionality, because if you pick different colours, they look
different to all people. What looks like red and green to one person, might
look like grey and yellow to other persons. Later on we just extended the
code to apply to the whole canvas.
1  http://wiki.scribus.net/index.php/Version_1.3.4%2B-New_Features
2  Petr Vanek

It is quite special to offer such a precise preview of different perspectives in your
software. Do you think it is particular to Scribus to pay attention to these kinds
of things?

Yeah, sure. Well, the interesting thing is ... in Scribus we are not depending
on money and time like other proprietary packages. We can ask ourselves:
Is this useful? Would I have fun implementing it? Am I interested in seeing
how it works? So if there is something we would like to see, we implement
it and look at it. And because we have a good contact with our user base,
we can also pick up good ideas from them.
There clearly is a strong connection between Scribus and the world of prepress
and print. So, for us as users, it is an almost hallucinating experience that while
on one side the software is very well developed when it comes to .pdf export for
example, I would say even more developed than in other applications, but then
still it is not possible to undo a text edit. Could you maybe explain how such a
discrepancy can happen, to make us understand better?

One reason is, that there are more developers working on the project,
and even if there was only one developer, he or she would have her own
interests. Remember what George Williams said about FontForge ... 3 he is
not that interested in nice Graphical User Interfaces, he just makes his own
functionality ... that is what interests him. So unless someone else comes
up who compensates for this, he will stick to what he likes. I think that
is the case with all Open Source applications. Only if you have someone
interested and able to do just this certain thing, it will happen. And if it
is something boring or something else ... it will probably not happen. One
way to balance this, is to keep in touch with real users, and to listen to
the problems they have. At least for the Scribus team, if we see people
complaining a lot about a certain feature missing ... we will at some point
say: come on, let’s do something about it. We would implement a solution and
when we get thanks from them and make them happy, that is always nice.

Can you tell us a bit more about the reasons for putting all this work into
developing Scribus, because a layout application is quite a complex monster with
all the elements that need to work together ... Why is it important you find, to
develop Scribus?
3  I think the ideas behind it are beautiful in my mind

I used to joke about the special mental state you need to become a Scribus
developer ... and one part of it is probably megalomania! It is a kind of mountain climbing. We just want to do it, to prove it can be done. That must
have been also true for Franz Schmid, our founder, because at that time,
when he started, it was very unlikely that he would succeed. And of course
once you have some feedback, you start to think: hey, I can do it ... it works.
People can use it, people can print with it, do things ... so why not make it even
better? Now we are following InDesign and QuarkXpress, and we are playing
the top league of page layout applications ... we’re kind of in a competition
with them. It is like climbing a mountain and then seeing the next, higher
mountain from the top.

In what way is it important to you that Scribus is Free Software?

Well ... it would not work with closed software. Open software allows you to
get other people that also are interested in working on the project involved,
so you can work together. With closed software you usually have to pay
people; I would only work because someone else wants me to do it and
we would not be as motivated. It is totally different. If it was closed, it
would not be fun. In Germany they studied what motivates Open Source
developers, and they usually list: ‘fun’; they want to do something more
challenging than at work, and some social stuff is mentioned as well. Of
course it is not money.
One of the reasons the Scribus project seems so important to us, is that it might
draw in other kinds of users, and open up the world of professional publishing to
people who can otherwise not afford proprietary packages. Do you think Scribus
will change the way publishing works? Does that motivate you, when you work
on it?

I think the success of Open Source projects will also change the way people
use software. But I do not think it is possible to foresee or plan, in what
way this will change. We see right now that Scribus is adopted by all kinds
of idealists, who think that is interesting, lets try how far we can go, and
do it like that. There are other users that really just do not have the money
to pay for a professional page layout application such as very small newspapers, associations, sports groups, church groups. They use Scribus because
otherwise they would have used a pirated copy of some other software, or
another application which is not up to that task, such as a normal word processor. Or otherwise they would have used a deficient application like MS
Publisher to do it. I think what Scribus will change, is that more people
will be exposed to page layout, and that is a good thing, I think.

In another interview with the Scribus team 4 , Craig Bradney speaks about the
fact that the software is often compared with its proprietary competition. He
brings up the ‘Scribus way of doing things’. What do you think is ‘The Scribus
Way’?

4  http://www.kde.me.uk/index.php?page=fosdem-interview-scribus

I don’t think Craig meant it that way. Our goal is to produce good output,
and make that easy for users. If we are in doubt, we think for example:
InDesign does this in quite an OK way, so we try to do it in a similar way;
we do not have any problems with that. On the other hand ... I told you a
bit about climbing mountains ... We cannot go from the one top to the next
one just in one step. We have to move slowly, and have to find our ways and
move through valleys and that sometimes also limits us. I can say: I want it
this way but then it is not possible now, it might be on the roadmap, but we
might have to do other things first.

When we use Scribus, we actually thought we were experiencing ‘The Scribus
Way’ through how it differs from other layout packages. First of all, in
Scribus there is a lot more attention to everything that happens after the layout
is done, i.e. export, error checking etc. and second, working with the text editor
is clearly the preferred way of doing layout. For us it links the software to a more
classic way of doing design: a strictly phased process where a designer starts with
writing typographic instructions which are carried out by a typesetter, after which
the designer pastes everything into the mock-up. In short: it seems easier to do a
magazine in Scribus, than a poster. Do you recognize that image?
That is an interesting thought, I have never seen it that way before. My
background is that I did do a newspaper, magazine for a student group, and
we were using PageMaker, and of course that influenced me. In a small
group that just wants to bring out a magazine, you distribute the task of
writing some articles, and usually you have only one or two persons who are
capable of using a page layout application. They pull in the stories and make
some corrections, and then do the layout. Of course that is a work flow I am
familiar with, and I don’t think we really have poster designers or graphic
artists in the team. On the other hand ... we do ask our users what they
think should be possible with Scribus and if a functionality is not there, we
ask them to put in a bug report so we do not forget it and some time later
we will pick it up and implement it. Especially the possibility to edit from
the canvas, this will improve in the upcoming versions.
Some things we just copied from other applications. I think Franz 5 had no
previous experience with PageMaker, so when I came to Scribus, and saw
how it handled text chains, I was totally dismayed and made some changes
right away because I really wanted it to work the way it works in PageMaker,
that is really nice. So, previous experience and copying from other applications was one part of the development. Another thing is just technical
problems. Scribus is at the moment internally not that well designed, so we
first have to rewrite a lot of code to be able to reach some elements. The
coding structure for drawing and layout was really cumbersome inside and
it was difficult to improve. We worked with 2,500 lines of code, and there
were no comments in between. So we broke it down in several elements,
put some comments in and also asked Franz: why did you did this or that, so
we could put some structure back into the code to understand how it works.
There is still a lot of work to be done, and we hope we can reach a state
where we can implement new stuff more easily.

5  Schmid

It is interesting how the 2,500 lines of code are really tangible when you use
Scribus old-style, even without actually seeing them. When Peter Linnel was
explaining how to make the application comply to the conservative standards of
the printing business, he used this term ‘self-defensive code’ ...
At Scribus we have a value that a file should never break in a print shop.
Any bug report we receive in this area, is treated with first priority.

We can speak from experience, that this is really true! But this robustness shifts
out of sight when you use the inbuilt script function; then it is as if you come
in to the software through the backdoor. From self-defence to the heart of the
application?

It is not really self-defence ... programmers and software developers sometimes use the expression: ‘a user should not shoot himself in the foot’.

Scribus will not protect you from ugly layout, if that would be possible at
all! Although I do sometimes take deliberate decisions to try and do it ...
for example that for as long as I am around, I will not make an option to
do ‘automatic letter spacing’, because I think it is just ugly. If you do it
manually, that is your responsibility; I just do not feel like making anything
like that work automatically. What we have no problems with, is to prevent
you from making invalid output. If Scribus thinks a certain font is not OK,
and it might break on one or two types of printers ... this is reason enough
for us to make sure this font is not used. The font is not even used partially,
it is gone. That is the kind of self-defence Peter Linnel was talking about.
It is also how we build .pdf files and PostScript. Some ways of building
PostScript take less storage, some of it would be easier to read for humans,
but we always take an approach that would be the least problematic in a
print shop. This meant for example, that you could not search in a .pdf. 6
I think you can do that now, but there are still limitations; it is on the
roadmap to improve over time, to even add an option to output a web oriented .pdf and a print oriented .pdf ... but an important value in Scribus
is to get the output right. To prevent people from really shooting themselves in
the foot.

Our last question is about the relation between the content that is layed out
in Scribus, and the fact that it is an Open Source project. Just as an example,
Microsoft Word will come out with an option to make it easy to save a document
with a Creative Commons License 7 . Would this, or not, be an interesting option
to add to Scribus? Would you be interested in making that connection, between
software and content?
It could well be we would copy that, if it has not already been patented by
Microsoft! To me it sounds a bit like a marketing trick ... because it is such
an easy function to do. But, if someone from Creative Commons would ask
for this function, I think someone would implement it for Scribus in a short
time, and I think we would actually like it. Maybe we would generalize it a
little, so that for example you could also add other licenses too. We already
have support for some meta data, and in the future we might put some more
function in to support license managing, for example also for fonts.
6  because the fonts get outlined and/or reencoded
7  http://creativecommons.org/press-releases/entry/5947

About the relation between content and Open Source software in general
... there are some groups who are using Scribus I politically do not really
identify with. Or more or less not at all. If I meet those people on the IRC
chat, I try to be very neutral, but I of course have my own thoughts in the
back of my head.

Do you think using a tool like Scribus produces a certain kind of use?

No. Preferences for work tools and political preference are really orthogonal,
and we have both. For example when you have some right wing people they
could also enjoy using Scribus and socialist groups as well. It is probably the
best for Scribus to keep that stuff out of it. I am not even sure about the
political conviction of the other developers. Usually we get along very well,
but we don’t talk about those kinds of things very much. In that sense I
don’t think that using Scribus will influence what is happening with it.
As a tool, because it makes creating good page layouts much easier, it will
probably change the landscape because a lot of people get exposed to page
layout and they learn and teach other people; and I think that is growing,
and I hope it will be growing faster than if it is all left to big players like
InDesign and Quark ... I think this will improve and it will maybe also
change the demands that users will make for our application. If you do page
layout, you get into a new frame of mind ... you look in a different way at
publications. It is less content oriented, but more layout oriented. You will
pick something up and it will spread. People by now have understood that
it is not such a good idea to use twelve different fonts in one text ... and I
think that knowledge about better page layout will also spread.


When we came to the Libre Graphics Meeting
for the first time in 2007, we recorded this rare
conversation with George Williams, developer of
FontForge, the editing tool for fonts. We spoke
about Shakespeare, Unicode, the pleasure of making beautiful things, and pottery.
We‘re doing these interviews, as we’re working as designers on Open Source
OK.

With Open Source tools, as typographers, but often when we speak to
developers they say well, tell me what you want, or they see our interest in
what they are doing as a kind of feature request or bug report.

(laughs) Yes.

Of course it’s clear that that’s the way it often works, but for us it’s also
interesting to think about these tools as really tools, as ways of shaping
work, to try and understand how they are made or who is making them.
It can help us make other things. So this is actually what we want to talk
about. To try and understand a bit about how you’ve been working on
FontForge. Because that’s the project you’re working on.

OK.

And how that connects to other ideas of tools or tools’ shape that you
make. These kind of things. So maybe first it’s good to talk about what
it is that you make.

OK. Well ... FontForge is a font editor.
I started playing with fonts when I bought my first Macintosh, back in the
early eighties (actually it was the mid-eighties) and my father studied textual bibliography and looked at the ways the printing technology of the
Renaissance affected the publication of Shakespeare’s works. And what that
meant about the errors in the compositions we see in the copies we have
left from the Renaissance. So my father was very interested in Renaissance
printing (and has written books on this subject) and somehow that meant
that I was interested in fonts. I’m not quite sure how that connection happened, but it did. So I was interested in fonts. And there was this program
that came out in the eighties called Fontographer which allowed you to create PostScript 1 and later TrueType 2 fonts. And I loved it. And I made lots
of calligraphic fonts with it.

You were ... like 20?

I was 20~30. Let’s see, I was born in 1959, so in the eighties I was in my
twenties mostly. And then Fontographer was bought up by Macromedia 3
who had no interest in it. They wanted FreeHand 4 which was done by
the same company. So they dropped Fon ... well they continued to sell
Fontographer but they didn’t update it. And then OpenType 5 came out and
Unicode 6 came out and Fontographer didn’t do this right and it didn’t do
that right ... And I started making my own fonts, and I used Fontographer
to provide the basis, and I started writing scripts that would add accents to
latin letters and so on. And figured out the Type1 7 format so that I could
decompose it — decompose the Fontographer output so that I could add
my own things to it. And then Fontographer didn’t do Type0 8 PostScript
fonts, so I figured that out.

1  PostScript fonts are outline font specifications developed by Adobe Systems for professional digital typesetting, which uses PostScript file format to encode font information. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
2  TrueType is an outline font standard developed by Apple and Microsoft in the late 1980s as a competitor to Adobe’s Type 1 fonts used in PostScript. Wikipedia. TrueType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
3  Macromedia was an American graphics, multimedia and web development software company (1992–2005). Its rival, Adobe Systems, acquired Macromedia on December 3, 2005. Wikipedia. Macromedia — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
4  Adobe FreeHand (formerly Macromedia Freehand) is a computer application for creating two-dimensional vector graphics. Adobe discontinued development and updates to the program. Wikipedia. Adobe FreeHand — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
5  OpenType is a format for scalable computer fonts. It was built on its predecessor TrueType, retaining TrueType’s basic structure and adding many intricate data structures for prescribing typographic behavior. Wikipedia. OpenType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
6  Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world’s writing systems. Wikipedia. Unicode — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
7  Type 1 is a font format for single-byte digital fonts for use with Adobe Type Manager software and with PostScript printers. It can support font hinting. It was originally a proprietary specification, but Adobe released the specification to third-party font manufacturers provided that all Type 1 fonts adhere to it. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

And about this time, the little company I was working for, a tiny little
startup — we wrote a web HTML editor — where you could sit at your
desk and edit pages on the web — it was before FrontPage 9 , but similar to
FrontPage. And we were bought by AOL and then we were destroyed by
AOL, but we had stock options from AOL and they went through the roof.
So ... in the late nineties I quit. And I didn’t have to work.
And I went off to Madagascar for a while to see if I wanted to be a primatologist. And ... I didn’t. There were too many leeches in the rainforest.

(laughs)

So I came back, and I wrote a font editor instead.
And I put it up on the web in late 99, and within a month someone
gave me a bug report and was using it.
(laughs) So it took a month

Well, you know, there was no advertisement, it was just there, and someone
found it and that was neat!
(laughs)

And that was called PfaEdit (because when it began it only did PostScript)
and I ... it just grew. And then — I don’t know — three, four, five years ago
someone pointed out that PfaEdit wasn’t really appropriate any more, so I
asked various users what would be a good name and a French guy said How
’bout FontForge? So. It became FontForge then. — That’s a much better
name than PfaEdit.

(laughs)

Used it ever since.

But your background ... you talked about your father studying ...
8  Type 0 is a ‘composite’ font format. A composite font is composed of a high-level font that references multiple descendent fonts. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
9  Microsoft FrontPage is a WYSIWYG HTML editor and Web site administration tool from Microsoft discontinued in December 2006. Wikipedia. Microsoft FrontPage — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

I grew up in a household where Shakespeare was quoted at me every day,
and he was an English teacher, still is an English teacher, well, obviously
retired but he still occasionally teaches, and has been working for about 30
years on one of those versions of Shakespeare where you have two lines of
Shakespeare text at the top and the rest of the page is footnotes. And I went
completely differently and became a mathematician and computer scientist
and worked in those areas for almost twenty years and then went off and
tried to do my own things.

So how did you become a mathematician?
(pause) I just liked it.
(laughs) just liked it

I was good at it. I got pushed ahead in high school. It just never occurred
to me that I’d do anything else — until I met a computer. And then I still
did maths because I didn’t think computers were — appropriate — or — I
was a snob. How about that.

(laughs)

But I spent all my time working on computers as I went through university.
And then got my first job at JPL 10 and shortly thereafter the shuttle 11
blew up and we had some — some of our experiments — my little group
— flew on the shuttle and some of them flew on an airplane which went
over the US took special radar pictures of the US. We also took special radar
pictures of the world from the shuttle (SIR-A, SIR-B, SIR-C). And then
our airplane burned up. And JPL was not a very happy place to work after
that. So then I went to a little company with some college friends of mine,
that they’d started, created compilers and debuggers — do you know what
those are?
Mm-hmm.

And I worked a long time on that, and then the internet came out and found
another little company with some friends — and worked on HTML.
10  Jet Propulsion Laboratory
11  The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Wikipedia. Space Shuttle Challenger disaster — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

So when, before we moved, I was curious about, I wanted you to talk
about a Shakespearian influence on your interest in fonts. But on the
other hand you talk about working in a company where you did HTML
editors at the time you actually started, I think. So do you think that
is somehow present ... the web is somehow present in your — in how
FontForge works? Or how fonts work or how you think about fonts?

I don’t think the web had much to do with my — well, that’s not true.
OK, when I was working on the HTML editor, at the time, mid-90s, there
weren’t any Unicode fonts, and so part of the reason I was writing all these
scripts to add accents and get Type0 support in PostScript (which is what
you need for a Unicode font) was because I needed a Unicode font for our
HTML product.
To that extent — yes-s-s-s.
It had an effect. Aside from that, not really.
The web has certainly allowed me to distribute it. Without the web I doubt
anyone would know — I wouldn’t have any idea how to ‘market’ it. If that’s
the right word for something that doesn’t get paid for. And certainly the
web has provided a convenient infrastructure to do the documentation in.
But — as for font design itself — that (the web) has certainly not affected
me.
Maybe with this creative commons talk that Jon Phillips was giving, there
may be, at some point, a button that you can press to upload your fonts to
the Open Font Library 12 — but I haven’t gotten there yet, so I don’t want
to promise that.
(laughs) But no, indeed there was – hearing you speak about ccHost 13 –
that’s the ...

Mm-hmm.

... Software we are talking about?

That’s what the Open Font Library uses, yes.
12
13

Open Font Library is a project devoted to the hosting and encouraged creation of fonts
released under Free Licenses.
Wikipedia. Open Font Library — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

ccHost is a web-based media hosting engine upon which Creative Commons’ ccMixter remix
web community is built. Wikipedia. CcHost — Wikipedia, The Free Encyclopedia, 2012. [Online; accessed 18.12.2014]


Yeah. And a connection to FontForge could change the way, not only
how you distribute fonts, but also how you design fonts.

It — it might. I don’t know ... I don’t have a view of the future.
I guess to some extent, obviously font design has been affected by requiring
it (the font) to be displayed on a small screen with a low resolution display.
And there are all kinds of hacks in modern fonts formats for dealing with
low resolution stuff. PostScript calls them hints and TrueType calls them
instructions. They are different approaches to the same thing. But that,
that certainly has affected font design in the last — well since PostScript
came out.
The web itself? I don’t think that has yet been a significant influence on
font design, but then — I’m no longer a designer. I discovered I was much
better at designing font editors than at designing fonts.
So I’ve given up on that aspect of things.
Mm-K, because I’m curious about your making a division about being a
designer, or being a font-editor-maker, because for me that same definition of maker, these two things might be very related.

Well they are. And I only got in to doing it because the tools that were
available to me were not adequate. But I have found since — that I’m
not adequate at doing the design, there are many people who are better at
designing — designing fonts, than I am. And I like to design fonts, but I
have made some very ugly ones at times.
And so I think I will — I’ll do that occasionally, but that’s not where I’m
going to make a mark.
Mostly now —
I just don’t have the —
The font editor itself takes up so much of time that I don’t have the energy,
the enthusiasm, or anything like that to devote to another major creative
project. And designing a font is a major creative project.
Well, can we talk about the major creative project of designing a font
editor? I mean, because I’m curious how — how that is a creative project
for you — how you look at that.

I look at it as a puzzle. And someone comes up to me with a problem, and I
try and figure out how to solve it. And sometimes I don’t want to figure out
how to solve it. But I feel I should anyway. And sometimes I don’t want to
figure out how to solve it and I don’t.
That’s one of the glories of being one’s own boss, you don’t have to do
everything that you are asked.
But — to me — it’s just a problem. And it’s a fascinating problem. But
why is it fascinating? — That’s just me. No one else, probably, finds
it fascinating. Or — the guys who design FontLab probably also find it
fascinating, there are two or three other font design programs in the world.
And they would also find it fascinating.

Can you give an example of something you would find fascinating?

Well. Dave Crossland who was sitting behind me at the end was talking
to me today — he sat down — we started talking after lunch but on the
way up the stairs — at first he was complaining that FontForge isn’t written
with a standard widget set. So it looks different from everything else. And
yes, it does. And I don’t care. Because this isn’t something which interests
me.
On the other hand he was saying that what he also wanted was a paragraph
level display of the font. So that as he made changes in the font he could
see a ripple effect in the paragraph.
Now I have a thing which does a word level display, but it doesn’t do
multi-lines. Or it does multi-lines if you are doing Japanese (vertical writing mode)
but it doesn’t do multi-columns then. So it’s either one vertical row or one
horizontal row of glyphs.
And I do also have a paragraph level display, but it is static. You bring
it up and it takes the current snapshot of the font and it generates a real
TrueType font and pass it off to the X Window 14 rasterizer — passes it off
to the standard Linux toolchain (FreeType) as that static font and asks that
toolchain to display text.
So what he’s saying is OK, do that, but update the font that you pass off every
now and then. And Yeah, that’d be interesting to do. That’s an interesting project
to work on. Much more interesting than changing my widget set which is
just a lot of work and tedious. Because there is nothing to think about.
It’s just OK, I’ve got to use this widget instead of my widget. My widget does

14 The X Window System is a windowing system for bitmap displays, common on UNIX-like computer operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. Wikipedia. X Window System — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]


exactly what I want — because I designed it that way — how do I make this
thing, which I didn’t design, which I don’t know anything about, do exactly
what I want?
And — that’s dull. For me.

Yeah, well.

Dave, on the other hand, is very hopeful that he’ll find some poor fool
who’ll take that on as a wonderful opportunity. And if he does, that would
be great, because not having a standard widget set is one of the biggest
complaints people have. Because FontForge doesn’t look like anything else.
And people say Well the grey background is very scary. 15
I thought it was normal to have a grey background, but uh ... that’s why we
now have a white background. A white background may be equally scary,
but no one has complained about it yet.

Try red.

I tried light blue and cream. One of them I was told gave people migraines
— I don’t remember specifically what the comment was about the light
blue, but

(someone from inkscape): Make it configurable.

Oh, it is configurable, but no one configures it.

(someone from inkscape): Yeah, I know.

So ...

So, you talked about spending a lot of time on this project, how does that
work, you get up in the morning and start working on FontForge? Or ...

Well, I do many things. Some mornings, yes, I get up in the morning and I
start working on FontForge and I cook breakfast in the background and eat
breakfast and work on FontForge. Some mornings I get up at four in the
morning and go out running for a couple of hours and come back home and
sort of collapse and eat a little bit and go off to yoga class and do a pilates
class and do another yoga class and then go to my pottery class, and go to
the farmers’ market and come home and I haven’t worked on FontForge at
all. So it varies according to the day. But yes I ...
15 It used to have a grey background, now it has a white background.


There was a period where I was spending 40, 50 hours a week working
on FontForge, I don’t spend that much time on it now, it’s more like 20
hours, though the last month I got all excited about the release that I put
out last Tuesday — today is Sunday. And so I was working really hard —
probably got up to — oh — 30 hours some of that time. I was really excited
about the change. All kinds of things were different — I put in Python
scripting, which people had been asking for — well, I’m glad I’ve done it,
but it was actually kind of boring, that bit — the stuff that came before was
— fascinating.

Like?

I — are you familiar with the OpenType spec? No. OK. The way you ...
the way you specify ligatures and kerning in OpenType can be looked at at
several different levels. And the way OpenType wants you to look at it, I
felt, was unnecessarily complicated. So I didn’t look at it at that level. And
then after about 5 years of looking at it that way I discovered that the reason
I thought it was unnecessarily complicated was because I was only used to
Latin or Cyrillic or Greek text, and for Latin, Cyrillic or Greek, it probably
is unnecessarily complicated. But for Indic scripts it is not unnecessarily
complicated, and you need all those things. So I ripped out all of the code
for specifying strange glyph conversions. You know in Arabic a character
looks different at the beginning of a word and so on? So that’s also handled
in this area. And I ripped all that stuff out and redid it in the way that
OpenType wanted it to be done and not the somewhat simplified but not
sufficiently powerful method that I’d been using up until then.
And that I found, quite fascinating.
And once I’d done that, it opened up all kinds of little things that I could
change that made the font editor itself bettitor. Better. Bettitor?

(laughs) That’s almost Dutch.

And so after I’d done that the display I talked about which could show a
word — I realized that I should redo that to take advantage of what I had
done. And so I redid that, and it’s now, it’s now much more usable. It now
shows — at least I hope it shows — more of what people want to see when
they are working with these transformations that apply to the font, there’s
now a list of the various transformations, that can be enabled at any time
and then it goes through and does them — whereas before it just sort of —
well it did kerning, and if you asked it to it would substitute this glyph so
you could see what it would look like — but it was all sort of — half-baked.
It wasn’t very elegant.
And — it’s much better now, and I’m quite proud of that.
It may crash — but it’s much better.

So you bring up half-baked, and when we met we talked about bread
baking.

Oh, yes.

And the pleasure of handling a material when you know it well. Maybe
make reliable bread — meaning that it comes out always the same way,
but by your connection to the material you somehow — well — it’s a
pleasure to do that. So, since you’ve said that, and we then went on
talking about pottery — how clay might be of the same — give the same
kind of pleasure. I’ve been trying to think — how does FontForge have
that? Does it have that and where would you find it or how is the ...
I like to make things. I like to make things that — in some strange
definition are beautiful. I’m not sure how that applies to making bread,
but my pots — I think I make beautiful pots. And I really like the glazing I
put onto them.
It’s harder to say that a font editor is beautiful. But I think the ideas behind
it are beautiful in my mind — and in some sense I find the user interface
beautiful. I’m not sure that anyone else in the world does, because it’s what
I want, but I think it’s beautiful.
And there’s a satisfaction in making something — in making something
that’s beautiful. And there’s a satisfaction too (as far as the bread goes) in
making something I need. I eat my own bread — that’s all the bread I eat
(except for those few days when I get lazy and don’t get to make bread that
day and have to put it off until the next day and have to eat something that
day — but that doesn’t happen very often).
So it’s just — I like making beautiful things.

OK, thank you.
Mm-hmm.

That was very nice, thank you very much.

Thank you. I have pictures of my pots if you’d like to see them?
Yes, I would very much like to see them.

This conversation with Juliane de Moerlooze was recorded in March 2009.

When you hear people talk about women having more sense
for the global, intuitive and empathic ... and men are more
logical ... even if it is true ... it seems quite a good thing to
have when you are doing math or software?

Juliane is a Brussels based computer scientist, feminist
and Linux user from the beginning. She studied math,
programming and system administration and participates in Samedies. 1 In February 2009 she was voted
president of the Brussels Linux user group (BXLug).

I will start at the end ... you have recently become president of the BXLug. Can
you explain to us what it is, the BXLug?

It is the Brussels Linux user group, a group of Linux users who meet
regularly to really work together on Linux and Free Software. It is the most
active group of Linux users in the French speaking part of Belgium.

How did you come into contact with this group?

That dates a while back. I have been trained in Linux a long time ago ...
Five years? Ten years? Twenty years?

Almost twenty years ago. I came across the beginnings of Linux in 1995 or
1996, I am not sure. I had some Slackware 2 installed, I messed around with
friends and we installed everything ... then I heard people talk about Linux
distributions 3 and decided to discover something else, notably Debian. 4
1 Femmes et Logiciels Libres, group of women maintaining their own server http://samedi.collectifs.net
2 one of the earliest Linux distributions
3 a distribution is a specific collection of applications and a software kernel
4 one of the largest Linux distributions


It is good to know that with Linux you really have a diversity, there are
distributions specially for audio, there are distributions for the larger public
with graphical interfaces, there are distributions that are a bit more ‘geek’,
in short you find everything: there are thousands of distributions but there
are a few principal ones and I heard people talk about an interesting development, which was Debian. I wanted to install it to see, and I discovered
the BXLug meetings, and so I ended up there one Sunday.

What was your experience, the first time you went?

(laughs) Well, it was clear that there were not many women, certainly not. I
remember some sessions ...
What do you mean, not many women? One? Or five?

Usually I was there on my own. Or maybe two. There was a time that we
were three, which was great. There was a director of a school who pushed
Free Software a lot, she organised real ’Journées du Libre’ 5 at her school,
to which she would invite journalists and so on. She was the director but
when she had free time she would use it to promote Free Software, but
I haven’t seen her in a while and I don’t know what happened since. I
also met Faty, well ... I wasn’t there all the time either because I had also
other things to do. There was a friendly atmosphere, with a little bar where
people would discuss with each other, but many were cluttered together in
the middle of the room, like autists hidden behind their computers, without
much communication. There were other members of the group who like me
realised that we were humans that were only concentrating on our machines
and not much was done to make new people feel welcome. Once I realised,
I started to move to the back of the room and say hello to people arriving.
Well, I was not the only one who started to do that but I imagine it might
have felt like a closed group when you entered for the first time. I also
remember in the beginning, as a girl, that ... when people asked questions
... nobody realised that I was actually teaching informatics. It seemed there
was a prejudice even before I had a chance to answer a question. That’s a
funny thing to remember.
Could you talk about the pleasure of handling computers? You might not be the
kind of person that loses herself in front of her computer, but you have a strong
5 Journées du Libre is a yearly festival organised by the BXLug


relationship with technology which comes out when you open up the commandline
... there’s something in you that comes to life.

Oh, yes! To begin with, I am a mathematician (‘matheuse’), I was a math
teacher, and I have been programming during my studies and yes, there
was something fantastic about it ... informatics for me is all about logic, but
logic in action, dynamic logic. A machine can be imperfect, and while I’m
not specialised in hardware, there is a part on which you can work, a kind
of determinism that I find interesting, it poses challenges because you can
never know all, I mean it is not easy to be a real system administrator that
knows every detail, that understands every problem. So you are partially in
the unknown, and discovering, in a mathematical world but a world that
moves. For me a machine has a rhythm, she has a cadence, a body, and her
state changes. There might be things that do not work but it can be that
you have left in some mistakes while developing etcetera, but we will get
to know the machine and we will understand. And after, you might create
things that are maybe interesting in real life, for people that want to write
texts or edit films or want to communicate via the Internet ... these are all
layers one adds, but you start ... I don’t know how to say it ... the machine is
at your service but you have to start with discovering her. I detest the kind
of software that asks you just to click here and there and then it doesn’t
work, and then you have to restart, and then you are in a situation where
you don’t have the possibility to find out where the problem is.
When it doesn’t show how it works?

For me it is important to work with Free Software, because when I have
time, I will go far, I will even look at the source code to find out what’s
wrong with the interface. Luckily, I don’t have to do this too often anymore
because software has become very complicated, twenty years later. But we
are not like persons with machines that just click ... I know many people,
even in informatics, who will say ‘this machine doesn’t work, this thing
makes a mistake’.

The fact that Free Software proposes an open structure, did that have anything
to do with your decision to be a candidate for BXLug?

Well, last year I was already very active and I realised that I was at a point
in my life that I could use informatics better, and I wanted to work in this
field, so I spent much time as a volunteer. But the moment that I decided,
now this is enough, I need to put myself forward as a candidate, was after a
series of sexist incidents. There was for example a job offer on the BXLug
mailing list that really needed to be responded to ... I mean ... what was
that about? To be concrete: Someone wrote to the mailing list that his
company was looking for a developer in such and such, and they would like
a Debian developer type applying, or if there weren’t any available, it would
be great if it would be a blond girl with large tits. Really, a horrible thing so
I responded immediately and then it became even worse because the person
that had posted the original message, sent out another one asking whether
the women on the list were into castration and it took a large amount of
diplomacy to find a way to respond. We discussed it with the Samediennes 6
and I thought about it ... I felt supported by many people that had well
understood that this was heavy and that the climate was getting nasty but
in the end I managed to send out an ironic message that made the other
person excuse himself and stop these kinds of sexist jokes, which was good.
And after that, there was another incident, when the now ex-president of
the group did a radio interview. I think he explained Free Software relatively
well to a public that doesn’t know about it, but as an example how easy it is
to use Free Software, he said even my wife, who is zero with computers, knows
how it works, using the familiar cliché without any reservation. We discussed
this again with the Samediennes, and also internally at the BXLug and then
I thought: well, what is needed is a woman as president, so I need to present
myself. So it is thanks to the Samedies, that this idea emerged, out of the
necessity to change the image of Free Software.

In software and particularly in Free Software, there are relatively few women
participating actively. What kinds of possibilities do you see for women to enter?
It begins already at school ... all the clichés girls hear ... it starts there. We
possibly have a set of brains that is socially constructed, but when you hear
people talk about women having more sense for the global, intuitive and
empathic ... and men are more logical ... even if it is true ... it seems quite a
good thing to have when you are doing math or software? I mean, there is
no handicap we start out with, it is a social handicap ... convincing girls to
become a secretary rather than a system administrator.
6 Participants in the Samedies: Femmes et logiciels libres (http://www.samedies.be)


I am assuming there is a link between your feminism and your engagement with
Free Software ...

It is linked at the point where ... it is a political liaison which is about reappropriating tools, and an attempt to imagine a political universe where we
are ourselves implicated in the things we do and make, and where we collectively can discuss this future. You can see it as something very large, socially,
and very idealist too. You should also not idealise the Free Software community itself. There’s an anthropologist who has made a proper description 7 ...
but there are certainly relational and organisational problems, and political
problems, power struggles too. But the general idea ... we have come to the
political point of saying: we have technologies, and we want to appropriate
them and we will discuss them together. I feel I am a feminist ... but I know
there are other kinds of feminism, liberal feminism for example, that do not
want to question the political economical status quo. My feminism is a bit
different, it is linked to eco-feminism, and also to the re-appropriation of
techniques that help us organise as a group. Free Software can be ... well,
there is a direction in Free Software that is linked to ‘Free Enterprise’ and
the American Dream. Everything should be possible: start-ups or pin-ups,
it doesn’t matter. But for me, there is another branch much more ‘libertaire’
and left-wing, where there is space for collective work and where we can ask
questions about the impact of technology. It is my interest of course, and I
know well that even as president of the BXLug I sometimes find myself on
the extreme side, so I will not speak about my ‘libertaire’ ideas all the time
in public, but if anyone asks me ... I know well what is at stake but it is not
necessarily representative of the ideas within the BXLug.

Are there discussions among members about the varying interests in Free Software?
I can imagine there are people more excited about efficiency and performativity
of these tools, and others attracted by its political side.

Well, these arguments mix, and also since some years there is unfortunately
less of a fundamental discussion. At the moment I have the impression that
we are more into ‘things to do’ when we meet in person. On the mailing
list there are frictions and small provocations now and then, but the really
interesting debates are over, since a few years ... I am a bit disappointed in
7 Christophe Lazarro. La liberté logicielle. Une ethnographie des pratiques d’échange et de coopération au sein de la communauté Debian. Academia editions, 2008


that, actually. But it is not really a problem, because I know other groups
that pose more interesting questions and with whom I find it more interesting to have a debate. Last year we worked away like small busy
bees, distributing the general idea of Free Software with maybe a hint to the
societal questions behind but in fact not marking it out as a counterweight
to a commercialised society. We haven’t really deepened the problematics,
because for me ... it is clear that Free Software has won the battle, it has
been completely recuperated by the business world, and now we are in a
period where tendencies will become clear. I have the impression that with
the way society is represented right now ... where they are talking about the
economic crisis ... and that we are becoming a society of ‘gestionnaires’ (managers)
and ideological questions seem not very visible.
So do you think it is more or less a war between two tendencies, or can both
currents coexist, and help each other in some way?

The current in Free Software that could think about resistance and ask
political questions and so on, does not have priority at the moment. But
what we can have is debates and discussions from person to person and we
can interpellate members of the BXLug itself, who really sometimes start to
use a kind of marketing language. But it is relational ... it is from person
to person. At the moment, what happens on the level of businesses and
society, I don’t know. I am looking for a job and I see clearly that I will
need to accept the kinds of hierarchies that exist but I would like to create
something else. The small impact a group like BXLug can make ... well,
there are several small projects, such as the one to develop a distribution
specifically designed for small organisations, to which nobody could object
of course. Different directions coexist, because there is currently not any
project with enough at stake that it would shock the others.
To go once again from a large scale to a small scale ... how would you describe
your own itinerary from mathematics to working on and with software?

I did two bachelors at the Université Libre de Bruxelles, and then I studied
to become a math teacher. I had a wonderful teacher, and we were into
the pleasure of exercising our brains, and discovering theory but a large part
of our courses were concentrated on pedagogy and how to become a good
teacher, how to open up the mind of a student in the context of a course.
That’s when I discovered another pleasure, of helping a journey into a kind
of math that was a lot more concrete, or that I learned to render concrete.
One of the difficult subjects you need to teach in high schools is scales and
plans. I came up with a rendering of a submarine and all students, boys as
well as girls, were quickly motivated, wanting to imagine themselves at the
real scale of the vessel. I like math, because it is not linked to a pre-existing
narrative structure, it is a theoretical construct we accept or not, like the
rules of a game. For me, math is an ideal way to form a critical mind.
When you are a child, math is fundamentally fiction, full stop. I remember
that when I learned modern math at school ... I had an older teacher, and
she wasn’t completely at ease with the subject. I have the impression that
because of this ... maybe it was a question of the relation between power and
knowledge ... she did not arrive with her knowledge all prepared, I mean it
was a classical form of pedagogy, but it was a new subject to her and there
was something that woke up in me, I felt at ease, I followed, we did not go
too fast ...
It was open knowledge, not already formed and closed?

Well, we discovered the subject together with the teacher. It might sound
bizarre, and she certainly did not do this on purpose, but I immediately felt
confident, which did not have too much to do with the subject of the class,
but with the fact that I felt that my brains were functioning.
I still prefer to discover the solution to a mathematical problem together
with others. But when it comes to software, I can be on my own. In
the end it is me, who wants to ask myself: why don’t I understand? Why
don’t I make any progress? In Free Software, there is the advantage of
having lots of documentation and manuals available online, although you
can almost drown in it. For me, it is always about playing with your brain,
there is at least always an objective where I want to arrive, whether it is
understanding theory or software ... and in software, it is also clear that you
want something to work. There is a constraint of efficiency that comes in
between, that of course somehow also exists in math, but in math when you
have solved a problem, you have solved it on a piece of paper. I enjoy the
game of exploring a reality, even if it is a virtual one.


In September 2013 writer, developer, freestyle rapper and
poet John Haltiwanger joined the ConTeXt user meeting in
Brejlov (Czech Republic) 1 to present his ideas on Subtext,
‘A Proposed Processual Grammar for a Multi-Output PreFormat’. The interview started as a way to record John’s
impressions fresh from the meeting, but moved into discussing the future of layout in terms of ballistics.

How did you end up going to the ConTeXt meeting? Actually, where was it?

It was in Brejlov, which apparently might not even be a town or city. It
might specifically be a hotel. But it has its own ... it’s considered a location,
I guess. But arriving was already kind of a trick, because I was under the
impression there was a train station or something. So I was asking around:
Where is Brejlov? What train do I take to Brejlov? But nobody had any clue,
that this was even something that existed. So that was tricky. But it was really a beautiful venue. How I ended up at the conference specifically? That’s
a good question. I’m not an incredibly active member on the ConTeXt
mailing list, but I pop up every now and again and just kind of express a
few things that I have going on. So initially I mentioned my thesis, back in
January or maybe March, back when it was really unformulated. Maybe it
was even in 2009. But I got really good responses from Hans. 2 Originally,
when I first got to the Netherlands in 2009 in August, the next weekend
was the third annual ConTeXt meeting. I had barely used the software at
that point, but I had this sort of impulse to go. Well anyway, I did not have
the money for it at that time. So the fact that there was another one coming
round, was like: Ok, that sounds good. But there was something ... we got
into a conversation on the mailing list. Somebody, a non-native English
speaker was asking about pronouns and gendered pronouns and the proper
way of ‘pronouning’ things. In English we don’t have a suitable gender neutral pronoun. So he asked the questions and some guy responded: The
1 http://meeting.contextgarden.net/2013/
2 Hans Hagen is the principal author and developer of ConTeXt, past president of NTG, and active in many other areas of the TeX community. Hans Hagen – Interview – TeX Users Group. http://tug.org/interviews/hagen.html, 2006. [Online; accessed 18.12.2014]


proper way to do it, is to use he. It’s an invented problem. This whole question is
an invented question and there is no such thing as a need for considering any other
options besides this. 3 So I wrote back and said: That’s not up to you to decide,
because if somebody has a problem, then there is a problem. So I kind of naively
suggested that we could make a Unicode character, that can stand in, like a
typographical element, that does not necessarily have a pronunciation yet.
So something that, when you are reading it, you could either say he or she
or they and it would be sort of [emergent|dialogic|personalized].
Like delayed political correctness or delayed embraciveness. But, little did I
know, that Unicode was not the answer.

Did they tell you that? That Unicode is not the answer?

Well, Arthur actually wrote back 4 , and he knows a lot about Unicode and
he said: With Unicode you have to prove that it’s in use already. In my sense,
Unicode was a playground where I could just map whatever values I wanted
to be whatever glyph I wanted. Somewhere, in some corner of unused
namespace or something. But that’s not the way it works. But TeX works
like this. So I could always just define a macro that would do this. Hans
actually wrote a macro 5 that would basically flip a coin at the beginning of
your paper. So whenever you wanted to use the gender neutral, you would
just use the macro and then it wouldn’t be up to you. It’s another way of
obfuscating, or pushing the responsibility away from you as an author. It’s
like ok, well, on this one it was she, the next it was he, or whatever.
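
To make that concrete, here is a minimal sketch of such a coin-flip macro, written in plain TeX (it also runs inside a ConTeXt source). It is not Hans's actual macro, which is linked in footnote 5; it only illustrates the division of labour: the author writes \heshe, and the run decides which pronoun it becomes.

    % A rough sketch, not the macro from the mailing list: pick a pronoun
    % once per run, based on the parity of the current minute (\time),
    % then use \heshe and \hisher throughout the text.
    \ifodd\time
      \def\heshe{she} \def\hisher{her}
    \else
      \def\heshe{he}  \def\hisher{his}
    \fi

A real implementation would draw on a proper random number rather than the clock, but the effect described above is the same: the choice is pushed away from the author and into the run.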

So in a way gender doesn’t matter anymore?

Right. And then I was just like, that’s something we should talk about at the
meeting. I guess I sent out something about my thesis and Hans or Taco,
they know me, they said that it would be great for you to do a presentation of
this at the meeting. So that’s very much how I ended up there.
You had never met anyone from ConTeXt before?
3 http://www.ntg.nl/pipermail/ntg-context/2010/051058.html
4 http://www.ntg.nl/pipermail/ntg-context/2010/051098.html
5 http://www.ntg.nl/pipermail/ntg-context/2010/051116.html


No. You and Pierre were the only people I knew, that have been using it,
besides me, at the time. It was interesting in that way, it was really ... I mean
I felt a little bit ... nervous isn’t exactly the word, but I sort of didn’t know
what exactly my position was meant to be. Because these guys ... it’s a users’
meeting, right? But the way that tends to work out for Open Source projects
is developers talking to developers. So ... my presentation was saturated ...
I think, I didn’t realise how quickly time goes in presentations, at the time.
So I spent like 20 minutes just going through my attack on media theory in
the thesis. And there was a guy, falling asleep on the right side of the room,
just head back. So, that was entertaining. To be the black sheep. That’s
always a fun position. It was entertaining for me, to meet these people
and to be at the same time sort of an outsider. Not a really well known
user contrasted with other people, who are more like cornerstones of the
community. They were meeting everybody in person for the first time. And
somehow I could connect. So now, a month and a half later we’re starting
this ConTeXt group, an international ConTeXt users’ group and I’m on the
board, I’m editing the journal. So it’s like, it ...
... that went fast!

It went fast indeed!

What is this ‘ConTeXt User Group’?

To a certain extent the NTG, which is the Netherlands TeX Group, had sort
of been consumed from the inside by the heaviness of ConTeXt, specifically
in the Netherlands. The discussion started to shift to be more ConTeXt.
Now the journal, the MAPS journal, there are maybe 8 or 10 articles, two of
which are not written by either Hans or Taco, who are the main developers
of ConTeXt. And there is zero on anything besides ConTeXt. So the NTG
is almost presented as ok, if you like ConTeXt or if you wanna be in a ConTeXt
user group, you join the NTG. Apparently the journal used to be quite thick
and there are lots of LaTeX users, who are involved. So partially the attempt
is sort of to ease that situation a little bit.
It allowed the two communities to separate?

Yeah, and not in any way like fast or abrupt fashion. We’re trying to be
very conscious about it. I mean, it’s not ConTeXt’s fault that LaTeX users
are not submitting any articles for the journal. That user group will always have the capacity, those people could step up. The idea is to set up a
more international forum, something that has more of the sense of support
for ... because the software is getting bigger and right now we’re really reliant on this mailing list and if you have your stupid question either Hans,
Taco or Wolfgang will shoot something back. And they become reliant on
Wolfgang to be able to answer questions, because there are more users coming. Arthur was really concerned, among other people, with the scalability
of our approach right now. And how to set up this infrastructure to support
the software as it grows bigger. I should forward you this e-mail that I
wrote, that is a response to their name choices. They were contemplating
becoming a group called ‘cows’. Which is clearly an inside joke because they
loved to do figure demonstrations with cows. And seeing ConTeXt as I do,
as a platform, a serious platform, for the future, something that ... it’s almost like it hasn’t gotten to its ... I mean it’s in such rapid development ...
it’s so undocumented ... it’s so ... like ... it’s like rushing water or something.
But at some point ... it’s gonna fill up the location. Maybe we’re still building this platform, but when it’s solid and all the pieces are ... everything
is being converted to metric, no more inches and miles and stuff. At that
point, when we have this platform, it will turn into a loadable Lua library.
It won’t even be an executable at that point.
It is interesting how quickly you have become part of this community. From being
complete outsider not knowing where to go, to now speaking about a communal
future.
To begin with, I guess I have to confront my own seemingly boundless
propensity for picking obscure projects ... as sort of my ... like the things
that I champion. And ... it often boils down to flexibility.
You think that obscurity has anything to do with the future compatibility of
ConTeXt?

Well, no. I think the obscurity is something that I don’t see this actually
lasting for too long in the situation of ConTeXt. As it gets more stable it’s
basically destined to become more of a standard platform. But this is all
tied into to stuff that I’m planning to do with the software. If my generative
typesetting platform ... you know ... works and is actually feasible, which is
maybe an 80% job.

Wait a second. You are busy developing another platform in parallel?

Yes, although I’m kind of hovering over it or sort of superseding it as
an interface. You have LaTeX, which has been at version 2e since the
mid-nineties, LaTeX 3 is sort of this dim point on the horizon. Whereas
ConTeXt is changing every week. It’s converting the entire structure of this
macro package from being written in TeX to being written in Lua. And
so there is this transition from what could be best described as an archaic
approach to programming, to this shiny new piece of software. I see it as
being competitive strictly because it has so much configurability. But that’s
sort of ... and that’s the double edged sword of it, that the configuration
is useless without the documentation. Donald Knuth is famous for saying
that he realised he would have to write the software and the manual for the
software himself. And I remember in our first conversation about the sort
of paternalistic culture these typographic projects seem to have. Or at least
in the sense of TeX, they seem to sort of coagulate around a central wizard
kind of guy.

You think ConTeXt has potential for the future, while TeX and LaTeX belong
... to the past?

I guess that’s sort of the way it sounds, doesn’t it?

I guess I share some of your excitement, but also have doubts about how far the
project actually is away from the past. Maybe you can describe how you think it
will develop, what will be that future? How you see that?

Right. That’s a good way to start untangling all the stuff I was just talking
about, when I was sort of putting the cart before the horse. I see it developing in some ways ... the way that it’s used today and the way that current,
heavy users use it. I think that they will continue to use it in a similar
way. But you already have people who are utilising LuaTeX ... and maybe
this is an important thing to distinguish between ConTeXt and LuaTeX.
Right now they’re sort of very tied together. Their development is intrinsic,
they drive each other. But to some extent some of the more interesting
stuff that is being done with these tools is ... like ... XML processing.
Where you throw XML into Lua code and run LuaTeX kerning operations
and line breaking and all this kind of stuff. Things that, to a certain extent,
you needed to engage TeX on its own terms in the past. That’s why macro
packages develop as some sort of sustainable way to handle your workflow.
This introduction of LuaTeX I think is sort of ... You can imagine it being
loaded as a library just as a way to typeset the documentation for code. It
could be like this holy grail of literate programming. Not saying this is the
answer, but that at least it will come out as a nice looking .pdf.

LuaTeX allows the connection to TeX to widen?

Yeah. It takes sort of the essence of TeX. And this is, I guess, the crucial
thing about LuaTeX: up until now TeX is both a typesetting engine and
a programming language. And not a very good one. So now that TeX can
be the engine, the Tschicholdian algorithms, the modernist principles, that,
for whatever reason, do look really good, can be utilised and connected to
without having to deal with this 32 year old macro programming language.
On top of that and part of how directly engaging with that kind of movement forward is ... not that I am switching over to LuaTeX entirely at this
point ... but that this generative typesetting platform that was sort of the
foundation of this journal proposal we did. Where you could imagine actual
humanities scholars using something that is akin to markdown or a wiki formatting kind of system. And I have a nice little buzzword for that: ‘visually
semantic markup’. XML, HTML, TeX, ... none of those are visually semantic. Because it’s all based around these primitives ‘ok, between the angle
brackets’. Everything is between angle brackets. You have to look what’s
inside the angle brackets to know what is happening to what’s between the
angle brackets. Whereas a visually semantic markup ... OK headers! OK
so it’s between two hashmarks or it’s between two whatever ... The whole
design of those preformatting languages, maybe not wiki markup, but at
least markdown was that it could be printed as a plaintext document and
you could still get a sense of the structure. I think that’s a really crucial
development. So ... in a web browser, on one half of the browser you have
your text input, on the other half you have a real-time rendering of it into
HTML. In the meantime, the way that the interface works, the way that
the visually semantic markup works, is that it is a mutable interface. It
could be tailored to your sense of what it should look like. It can be tailored
specifically to different workflows. And because there is such a diversity
within typographic workflows, typesetting workflows ... that is akin to the
separation of form and content in HTML and CSS, but it’s not meant to be
... as problematic as that. I’m not sure if that is a real goal, or if that goal
is feasible or not. But it’s not meant to be drawing an artificial line, it’s just
meant to make things easier.

So by pulling apart historically grown elements, it becomes ... possibly modern?
Hypermodern?

Something for now and later.

Yes. Part of this idea, the trick ... This software is called ‘Subtext’ and at
this point it’s a conceptual project, but that will change pretty soon. Its
trick is this idea of separation: instead of form and content, it’s translation
and effect. The parser itself has to be mutable, has to be able to pull in
the interface, print like decorations basically from a YAML configuration
file or some sort of equivalent. One of those configuration mechanisms that
was designed to be human readable and not machine readable. Like, well,
both, striking that balance. Maybe we can get to that kind of ... talking
about agency a little bit. The trick is to really pull that out so that if you want
to ... for instance now in markdown if you have quotes it will be translated
in ConTeXt into \quotation. In ConTeXt that’s a very simple switch
to turn it into German quotes. Or I guess that’s more like international
quotes, everything not English. For the purposes of markdown there is
no, like really easy way, to change that part of the interface. So that when
I’m writing, when I use the angle brackets as a quote it would turn into
a \quotation in the output. Whereas with ‘Subtext’ you would just go
into the interface type like configuration and say: These are converted into
a quote basically. And then the effects are listed in other configuration files
so that the effects of quotes in HTML can be ...
... different.

Yes. Maybe have specific CSS properties for spacing, that kind of stuff. And
then in ConTeXt the same sort of ... both the environmental setup as well
as the raw ‘what is put into the document when it’s translated’. This kind of
separation ... you know at that point if both those effects are already the way
that you want them, then all you have to do is change the interface. And
then later on a new typesetting system, maybe iTeX, comes out, you know, Knuth’s
joke, anyway. 6 That kind of separation seems to imply a future proofing
that I find very elegant. That you can just add later on the effects that you
need for a different system. Or a different version of a system, not that you
have to learn ‘mark 6’, or something like that ...
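
As a small aside on the \quotation point above: in ConTeXt the quote glyphs follow the main language of the document, so (assuming the converter emits \quotation{...} for quoted text, as described) the same source can come out with English-style or German-style quotes without being edited. A minimal sketch:

    % The same \quotation{...} in the source, rendered with the quote
    % glyphs of the current main language.
    \mainlanguage[de]   % switch to [en] for English-style quotes
    \starttext
    \quotation{so it goes}
    \stoptext
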
Back to the future ... I wonder about ConTeXt being bound to a particular
practice located with two specific people. Those two are actually the ones that
produce the most complete use cases and thereby define the kind of practice that
ConTeXt allows. Do you think this is a temporary stage or do you think that by
inviting someone like you on the board, as an outsider, that it is a sign of things
going to change?
Right. Well, yeah, this is another one of those put-up or shut-up kind of
things because for instance at the NTG meeting on Wednesday my presentation was very much a user presentation in a room of developers. Because I
basically was saying: Look like this is gonna be a presentation – most presentations are about what you know – and this presentation is really about
what I don’t know ... but what I do know is that there is a lot of room for
teaching ConTeXt in a more practical fashion, you could say. So my idea is
to basically write this documentation on how to typeset poetry, which gets
6 http://en.wikipedia.org/wiki/Donald_Knuth#Humor


into a lot of interesting questions, just a lot of interesting things. Like you’re
gonna need to write your own macros just at the start ... to make sure you
don’t have to go in and change every width value at some point. You know,
this kind of thing like ... really baby steps. How to make a cover page. These
kinds of things are not documented.
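
To picture the 'write your own macros at the start' baby step, a sketch with invented names (it is not taken from any existing tutorial): the width lives in one macro, so it only ever has to be changed in one place.

    % \PoemWidth is an invented name for the example: change it here,
    % and every block that uses it follows.
    \def\PoemWidth{24em}
    \starttext
    \framed[width=\PoemWidth, frame=off]{A stanza set to the shared width ...}
    \stoptext
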
Documentation is let’s say an interesting challenge for ConTeXt. How do you
think the ConTeXt community could enable different kinds of use, beyond the
ones that are envisioned right now? I guess you have a plan?

Yeah ... that’s a good question. Part of it is just to do stuff, like to get you
more involved in the ConTeXt group for instance, because I was talking to
Arthur and he hadn’t even read the article from V/J10 7 . I think that kind
of stuff is really important. It’s like the whole Blender Foundation kind
of impulse. We have some developers who are paid to do this and that’s
kind of rare already in an Open Source/Free Software project. But then to
kind of have users pushing the boundaries and hitting limits. It’s rare that
Hans will encounter some kind of use case that he didn’t think of and react
in a negative way. Or react in a way like I’m not gonna even entertain that
possibility. Part of it is moving beyond this ... even the sort of centralisation
as you call it ... how to do that directly ... I see it more as baby steps for
me personally at this point. Just getting a tutorial on how to typeset a cd
booklet. Just basically what I’m writing. That at the same time, you know,
gets you familiar with ConTeXt and TeX in general. Before my presentation
I was wondering, I was like: how do you set a variable in TeX. Well, it’s a
macro programming language so you just make a macro that returns a value.
Like that kind of stuff is not initially obvious if you’re used to a different
paradigm or you know .. So these baby steps of kind of opening the field up
a little bit and then using it in my own practice of guerilla typesetting and kind
of putting it out there. And you know ... And people are gonna start being like:
oh yeah, beautiful documents are possible or at least better looking documents
are possible. And then once we have them at that, like, then how do we
7 Constant, Clementine Delahaut, Laurence Rassel, and Emma Sidgwick. Verbindingen/Jonctions: Tracks in electr(on)ic fields. Constant Verlag, 2009. http://ospublish.constantvzw.org/sources/vj10


take it to the next level. How do I turn a lyric sheet from something that
is sort of static to ... you know ... two pages that are like put directly on the
screen next to each other. Like a screen based system where it’s animated
to the point ... and this is what we actually started to karaoke last night ...
so you have an English version and a Spanish version – for instance in the
case of the music that I’ve been doing. And we can animate. We can have
timed transitions so you can have a ‘current lyric indicator’ move down the
page. That kind of use case is not something that Pragma 8 is ever going
to run into. But as soon as it is done and documented then what’s the next
thing, what kind of animations are gonna be ... or what kind of ... once that
possibility is made real or concrete ... you know, so I kind of see it as a very
iterative process at this point. I don’t have any kind of grand scheme other
than ‘Subtext’ kind of replacing Microsoft Word as the dominant academic
publishing platform, I think. (laughs)

Just take over the world.

That’s one way to do it, I think.

You talked about manuals for things that you would maybe not do in another
kind of software ...

Right.

Manuals that not just explain ‘this is how you do it’ but also ‘this is the kind of
user you could be’.

Right.

I’m not sure if instructions for how to produce a cd cover would draw me in, but
if it helped me understand how to set a variable, it would.

Right.
8 Hans Hagen’s company for Advanced Document Engineering


You want the complete manual of course?
Yeah!

You were saying that ConTeXt should replace Microsoft Word as the standard
typesetting tool for academic publishing. You are thinking about the future for
ConTeXt more in the context of academic publishing than in traditional design
practice?

Yes. In terms of ‘Subtext’, I mean the origins of that project, very much
... It’s an interesting mix because it’s really a hybridity of many different
processes. Some of it comes directly from this obscure art project ‘the abstraction’. So I have stuff like the track changes using Git version control
and everything being placed on plaintext as a necessity. That’s a holdover
from that project as well as the idea of gradiated presence. Like software
enabling a more real-time peer review, anonymous peer review system. And
even a collaborative platform where you don’t know who you’re writing with,
until the article comes out. Something like that. So these interesting
tweaks that you can kind of make, those all are holdovers from this very,
very much maybe not traditional design practice but certainly like ... twisted
artistic project that was based around hacking a hole from signified to signifier and back again. So ... In terms of its current envisionment and the
use case for which we were developing it at the beginning, or I’m developing
it, whatever ... I’ll say it the royal way, is an academic thing. But I think
that ... doesn’t have to stop there and ...

At some point at OSP we decided to try ConTeXt because we were stuck with
Scribus for page layout as the only option in Free Software. We wanted to escape
that kind of stiffness of the page, or of the canvas in a way. But ConTeXt
was not the dream solution either. For us it had a lot to do, of course, with
issues of documentation ... of not understanding, not coming from that kind of
automatism of treating it as another programming language. So I think we could
have had much more fun if we had understood the culture of the project better.
I think the most frustrating experience was to find out how much the model of
typesetting is linked to the Tschichold universe, that at the moment you try to
break out, the system completely loses all flexibility. And it is almost as if you
can hear it freeze. So if we blame half of our troubles with ConTeXt on our
inability to actually understand what we could do with ConTeXt, I think there is
a lot also in its assumption what a legible text would look like, how it’s structured,
how it’s done. Do you think a modern version of ConTeXt will keep that kind
of inflexibility? How can it become more flexible in its understanding of what a
page or a book could be?

That’s an interesting question, because I’m not into the development side
of LuaTeX at all, but I would be surprised if the way that it was being
implemented was not significantly more modular than for instance when
it was written in Pascal, you know, how that was. Yeah, that’s a really
interesting question of how swappable is the backend. How much can we
go in and kind of ... you know. And it is an inspirational question to me,
because now I’m trying to envision a different page. And I’m really curious
about that. But I think that ConTeXt itself will likely be pretty stable in its
scope ... in that way of being ... sort of ... deterministic in its expectations.
But where that leaves us as users ... first I’d be really surprised if the engine
itself, if LuaTeX was not being some way written to ... I feel really ignorant
about this, I wish I just knew. But, yeah, there must be ... There is no way
to translate this into a modern programming language without somehow
thinking about this in terms of the design. I guess to certain extent the
answer to your question is dependent on the conscientiousness of Taco and
the other LuaTeX developers for this kind of modularity. But I don’t ... you
know ... I’m actually feeling very imaginatively lacking in terms of trying to
understand what your award-winning book did not accomplish for you ...
Yeah, what’s wrong with that?

I think it would be good to talk with Pierre, not Pierre Marchand but Pierre ...
... Huggybear.

Yeah. We have been talking about ‘rivers’ as a metaphor for layout ... like where
you could have things that are ... let’s say fluid and other things that could be
placed and force things around it. Layout is often a combination of those two
things. And this is what is frustrating in canvas based layout that it is all fixed
and you have to make it look like it’s fluid. And here it’s all fluid and sometimes
you want it to be fixed. And at the moment you fix something everything breaks.
Then it’s up to you. You’re on your own.

Right.

The experience of working with ConTeXt is that it is very much elastic, but there
is very little imagination about what this elasticity could bring.
Right.

It’s all about creating universally beautiful pages, in a way it is using flexibility
to arrive at something that is already fixed.

Right.

Well, there is a lot more possible than we ever tried, but ... again ... this goes
back to the sort of centralist question: If those possibilities are mainly details in
the head of the main developers, then how will I ever start to fantasize about the
book I would want to make with it?

Right.

I don’t even need access to all the details. Because once I have a sort of sense of
what I want to do, I can figure it out. Right now you’re sort of in the dark about
the endless possibilities ...

Its existence is very opaque in some ways. The way that it’s implemented,
like everything about it is sort of ... looking at the macros that they wrote,
the macros that you invoke ... like ... that takes ... flow control in TeX is like
... I mean you might as well write it in Bash or ... I mean I think Bash would
even be more sensible to figuring out what’s going on. So, the switch to Lua
there is kind of I think a useful step just in being more transparent. To allow
you to get into becoming more intimate with the source or the operation
of the system ... you know ... without having to go ... I mean I guess ... the
TeX Book would still be useful in some ways but that’s ... I mean ... to go
back and learn TeX when you’re just trying to use ConTeXt is sort of ...
it’s not ... I’m not saying it’s, you know ... it’s a proper assumption to say oh
yeah, don’t worry about the rules and the way TeX is organised but you’re not
writing your documents in ConTeXt the way you would write them if you’re
using plain TeX. I mean that’s just ... it’s just not ... It’s a different workflow
... it has a completely different set of processes that you need to arrange. So
it has a very distinct organisational logic ... that I think that ... yeah ... like
being able to go into the source and be like oh OK, like I can see clearly this
is ... you know. And then you can write in your own way, you can write back
in Lua.

This kind of documentation would be the killer feature of ConTeXt ...
Yeah.

It’s a kind of strange paradox in the TeX community. On the one hand you’re sort of
supposed to be able to do all of it. But at the same time on every page you’re told
not to do it, because it’s not for you to worry about this.

Right. That’s why the macro packages exist.

With ConTeXt there is this strange sense of very much wanting to understand the
way the logic works, or ... what the material is, you’re dealing with. And at the
same time being completely lost in the labyrinth between the old stuff from TeX
and LaTeX, the newer stuff from LuaTeX, Mark 4, 3, 5, 6 ...

So that was sort of my idea with the cd typesetting project, is not to say,
that that is something that is immediately interesting to anybody who is
not trying to do that specifically, right? But at the same time if I’m ... if it’s
broken down into ‘How to do a bitmap cover page’ (=Lesson 1).
Lesson 2: ‘How to start defining your own macros’. And so you know, it’s
this thing that could be at one point a very ... because the documentation as
it stands right now is ... I think it’s almost ... fixing that documentation, I’m
not sure is even possible. I think that it has to be completely approached
differently. I mean, like a real ConTeXt manual, that documents ... you
know ... command by command exactly what those things do. I mean our
reference manual now just shows you what arguments are available, but
doesn’t even list their possible values. It’s just like: These are the positions
of the arguments. And it’s interesting.

So expecting writers of the program to write the manual fails?
Right.

What is the difference between your plans for ‘Subtext’ and a page layout program
like Scribus?

You mentioned ‘Subtext’ coming from a more academic publishing rather
than a design background. I think that this belies where I have come into
typesetting and my understanding of typography. Because in reality DTP
has never kind of drawn me in in that way. The principal differences are
really based on this distribution of agency, in my mind. That when you’re
demanding the software to be ‘what you see is what you get’ or when you
place that metaphor between you and your process. Or you and your engagement, you’re gaining the usefulness of that metaphor, which is ... it’s
almost ... I hope I don’t sound offensive ... but it’s almost like child’s play.
It’s almost like point, click, place. To me it just seems so redundant or ...
time-consuming maybe ... to really deal with it that way. There are advantages to that metaphor. For instance I don’t plan on designing covers in
ConTeXt. Or even a poster or something like that. Because it doesn’t really
give affordances for that kind of creativity. I mean you can do generative
stuff with the MetaFun package. You can sort of play around with that. But
I haven’t seen a ConTeXt generated cover that I liked, to be honest.

OK.

OK. Principal differences. I’m trying to ... I’m struggling a little bit. I think
that’s partially because I’m not super comfortable with the layout mechanism
and stuff yet. And you have things like \blank in order to move down the
page. Because it has this sort of literal sense of a page and movement on
a page. Obviously Scribus has a literal idea of a page as well, but because
it’s WYSIWYG it has that benefit where you don’t have to think OK, well,
maybe it should be 1.6 ems down or maybe it should be 1.2 ems down. You
move it until it looks right. And then you can measure it and you’re like
ok, I’m gonna use this measurement further on in my document. So it’s
that whole top-down vs. bottom-up approach. It really breaks down into
the core organisational logics of those softwares.
I think it’s too easy to make the difference based on the fact that there is a
metaphorical layer or not. I think there is a metaphorical layer in ConTeXt too
...

Right. Yeah for sure.

And they come at a different moment and they speak a different language. But I
think that we can agree that they’re both there. So I don’t think it’s about the one
being without and the other being with. Of course there is another sense of placing
something in a canvas-based software than in a ... how would you call this?

So I guess it is either ‘declarative’ or ‘sequence’ based. You could say generative in a way ... or compiled or ... I don’t even know. That’s a cool question.

What is the difference really and why would you choose the one or the other? Or
what would you gain from one to the other? Because it’s clear that posters are not
easily made in ConTeXt. And that it’s much easier to typeset a book in ConTeXt
than it is in Scribus, for example.

Declarative maybe ...

So, there’s hierarchy. There’s direction. There’s an assumption about structure
being good or bad.

Yeah. Boxes, Glue. 9

What is exciting in something like this is that placement is relative always.
Relative to a page, relative to a chapter, relative to itself, relative to what’s next
to it. Where in a canvas based software your page is fixed.

Right.

This is very different from a system where you make a change, then you compile
and then you look at it and then you go back into your code. So where there is a
larger distinction between output and action. It’s almost gestural ...

It’s like two different ways of having a conversation. Larry Wall has this really great metaphor. He talks about ‘ballistic design’. So when you’re doing
code, maybe he’s talking more about software design at this point, basically
it’s a ‘ballistic practice’ to write code. Ballistics comes from artillery. So you
shoot at a thing. If you hit it, you hit it. If you miss it, you change the
amount of gunpowder, the angle. So code is very much a ‘ballistic practice’.
I think that filters into this difference in how the conversation works. And
this goes back to the agencies where you have to wait for the computer to
figure it out, to come with its part into the conversation. You’re putting the code
in and then the computer is like ok, this is what the code means,
and then is this what you wanted? Whereas with the WYSIWYG
kind of interface the agency is distributed in a different way. The computer
is just like ok, I'm a canvas; I'm just here to hold what
you're putting on and I'm not going to change it in any way or
affect it in any way that you don't tell me to. I mean it’s
the same way but I ... is it just a matter of the compilation time? In one
you’re sort of running an experiment, in another you’re just sort of painting.
If that’s a real enough distinction or if that’s ... you know ... it’s sort of ... I
mean I kind of see that it is like this. There is ballistics vs. maybe fencing
or something.
9
Boxes, which are things that can be drawn on a page, and glue, which is invisible stretchy stuff that sticks
boxes together. Mark C. Chu-Carroll, The Genius of Donald Knuth: Typesetting with Boxes and Glue, 2008
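
As a rough illustration of the boxes-and-glue idea in the footnote above, here is a toy sketch in Python (not TeX itself, and not Knuth's actual algorithm): fixed-width boxes separated by glue that has a natural size plus stretch and shrink capacity, with the leftover space on a line shared out over the glue. The numbers are made up.

    # Toy model of TeX-style boxes and glue: boxes have fixed widths,
    # glue has a natural width plus stretch/shrink capacity. Setting a
    # line means spreading the surplus (or deficit) over the glue.
    def set_line(boxes, glue, line_width):
        """Return the final width of each glue item on a line.

        boxes: list of fixed box widths
        glue:  list of (natural, stretch, shrink) tuples, len(boxes) - 1
        """
        natural = sum(boxes) + sum(g[0] for g in glue)
        surplus = line_width - natural
        if surplus >= 0:
            capacity = sum(g[1] for g in glue) or 1.0   # total stretch
            return [g[0] + surplus * g[1] / capacity for g in glue]
        capacity = sum(g[2] for g in glue) or 1.0       # total shrink
        return [g[0] + surplus * g[2] / capacity for g in glue]

    # Three 20pt boxes and two glue items set on a 100pt line:
    print(set_line([20, 20, 20], [(10, 5, 3), (10, 5, 3)], 100))
    # -> [20.0, 20.0]: the 20pt surplus is split evenly over the glue.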


Fencing?

Fencing. Like more of a ...
Or wrestling?

Or wrestling.

When you said just sort of painting I felt offended. (laughs)
I’m sorry. I didn’t mean it like that.

Maybe back to wrestling vs. ballistics. Where am I and where is the machine?
Right.

I understand that there are lots of childish ways of solving this need to make the
computer disappear. Because if you are not wrestling ... you’re dancing, you know.

Yeah.

But I think it’s interesting to see that ballistics, that the military term of shooting
at something, is the kind of metaphor to be used. Which is quite different than a
creative process where there is a direct feedback between something placed and the
responses you have.
Right.

And it’s not always about aiming, but also sometimes about trying and about
kind of subtle movements that spark off something else. Which is very immediate.
And needs an immediate connection to ... let’s say ... what you do and what you
get. It would be interesting to think about ways of talking about ‘what you see
is what you get’ away from this assumption that it is always about those poor users
that are not able to do it in code.

Right.


Because I think there is essential stuff that you can not do in a tool like this –
that you can do in canvas-based tools. And so ... I think it’s really a pity when
... yeah ... It’s often overlooked and very strange to see. There is not a lot of good
thinking about that kind of interaction. Like literal interaction. Which is also
about agency with the painter. With the one that makes the movement. Where
here the agency is very much in this confrontational relation between me aiming
and ...

So yeah, when we put it in those metaphors. I’m on the side with the
painting, because ...

But I mean it’s difficult to do a book while wrestling. And I think that’s why a
poster is very difficult to do in this sort of aiming sense. I mean it’s fun to do but
it’s a strange kind of poster you get.

You can’t fit it all in your head at once. It’s not possible.
No. So it’s okay to have a bit of delay.

I wondered to what extent, if it were updated in real time, all the changes
you’re making in the code, if compilation was instantaneous, how that would
affect the experience. I guess it would still have this ballistic aspect, because
what you are doing is ... and that’s really the side of the metaphor ... or
a metaphorical difference between the two. One is like a translation. The
metaphor of ok this code means this effect ... That’s very different from picking
a brush and choosing the width of the stroke. It’s like when you initialise
a brush in code, set the brush width and then move it in a circle with a
radius of x. It’s different than taking the brush in Scribus or in whatever
WYSIWYG tool you are gonna use. There is something intrinsically different about a translation from primitives to visual effect than this kind of
metaphorical translation of an interaction between a human and a canvas ...
kind of put into software terms.
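
A tiny Python sketch of that ‘translation from primitives to visual effect’, offered as an illustration only: the circle exists first as numbers (a brush width, a radius, a list of points) and only becomes a visible stroke once something renders it, whereas in a canvas tool the gesture and the mark coincide.

    # A stroke described in code: primitives first, visual effect later.
    import math

    brush_width = 4        # would be chosen by "initialising a brush"
    radius = 50
    steps = 64

    # Points along a circle of the given radius, to be rendered elsewhere:
    points = [(radius * math.cos(2 * math.pi * i / steps),
               radius * math.sin(2 * math.pi * i / steps))
              for i in range(steps)]
    print(len(points), "points, stroke width", brush_width)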

But there is a translation from me, the human, to the machine, to my human eye
again, which is hard to grasp. Without wanting it to be made invisible somehow.
Or to assume that it is not there. This would be my dream tool that would
allow you to sense that kind of translation without losing the ... canvasness of the
canvas. Because it’s frustrating that the canvas has to not speak of itself to be able
to work. That’s a very sad future for the canvas, I think.

I agree.

But when it speaks of itself it’s usually seen as buggy or it doesn’t work. So that’s
also not fair to the canvas. But there is something in drawing digitally, which
is such a weird thing to do actually, and this is interesting in this sort of cyborgs
we’re becoming, which is all about forgetting about the machine and not feeling
what you do. And it’s completely a different world in a way than the ballistics of
ConTeXt, LaTeX or whatever typesetting platform.

Yeah, that’s true. And it’s something that my students were forced to confront and it was really interesting because that supposed invisibility or almost
necessitated invisibility of the software. As soon as they’re in Inkscape instead of Illustrator they go crazy. Because it’s like they know what they want
to do, but it’s a different mechanism. It’s the same underlying process which
itself is only just meant to give you a digital version of what you could easily
do on a piece of paper. Provided you have the right paints and stuff. So
perhaps it’s like the difference between moving from a brush to an air brush.
It’s a different ... interface. It’s a different engagement. There is a different
thing between the human and the canvas. You engage in this creative process where it’s like ok, we’ll now have an airbrush and I can play around to
see what the capacities are without being stuck in well I can’t get it to do
my fine lines the same way I can when I have my brush. It’s like when you
switch the software out from between the person and the canvas. It’s that
sort of invisibility of the interface and it’s intense for people. They actually
react quite negatively. They’re not gonna bother to learn this other software
because in the end they’re doing less. The reappearance of this software
... of software between them and their ideas is kinda too much. Whereas
people who don’t have any preconceived notions are following the tutorials
and they’re learning and they’re like ok, I’m gonna continue to play with this.
Because this software is starting to become more invisible.

But on a sort of theoretical level the necessitated invisibility, as you said it nicely, is
something I would always speak against. Because that means you hide something
that’s there. Which seems a stupid thing to do, especially when you want to find
a kind of more flexible relation to your tools. I want to find a better word for
describing that sort of quick feedback. Because if it’s too much in the way, then
the process stops. The drawing can not be made if I’m worried too much about
the point of my pencil that might break ... or the ... I don’t know ... the nozzle
being blocked.
Dismissing the other tools is ... I was kinda joking, but ... there is something sort of blocklike: Point. Move. This. But at the same time, like I
said, I wouldn’t do a cover in ConTeXt. Just like I probably wouldn’t try to
do something like a recreation of a Pre-Raphaelite painting in Processing or
something like that. There is just points where our metaphors break down.
And so ... It sounded sort of, ok, bottom-up über alles like always.

Ok, there’s still painters and there’s still people doing Pre-Raphaelite paintings
with Pre-Raphaelite tools, but most of us are using computers. So there should be
more clever ways of thinking about this.
Yeah. To borrow a quote from my old buddy Donald Rumsfeld: There are
the known knowns, the known unknowns and the unknown unknowns. That
actually popped into my head earlier because when we were talking about
the potentials of the software and the way that we interact and stuff, it’s like
we know that we don’t know ... other ways of organizing. We know that
there are, like there has to be, another way, whether it is a middle path between these two or some sort of ... Maybe it’s just tenth dimensional, maybe
it’s fourth dimensional, maybe it’s completely hypermodern or something.
Anyway. But the unknown unknowns ... It’s like the stuff that we can’t
even tell we don’t know about. The questions that we don’t know about
that would come up once we figure out these other ways of organising it.
That’s when I start to get really interested in this sort of thing. How do you
even conceive of a practice that you don’t know? And once you get there,
there’s going to be other things that you know you don’t know and have to
keep finding them. And then there’s gonna be things that you don’t know
you don’t know and they just appear from nowhere and ... it’s fun.

We discovered the work of Tom Lechner for the first time at
the Libre Graphics Meeting 2010 in Brussels. Tom traveled
from Portland to present Laidout, an amazing tool that he
made to produce his own comic books and also to work on
three dimensional mathematical objects. We were excited
about how his software represents the gesture of folding,
loved his bold interface decisions plus were impressed by the
fact that Tom decided to write his own programming framework for it. A year later, we met again in Montreal, Canada
for the Libre Graphics Meeting 2011 where he presented a
follow-up. With Ludivine Loiseau 1 and Pierre Marchand 2,
we finally found time to sit down and talk.
What is Laidout?

Well, Laidout is software that I wrote to lay out my cartoon books in an
easy fashion. Nothing else fit my needs at the time, so I just wrote it.
It does a lot more than laying out cartoons?

It works for any image, basically, and gradients. It does not currently do
text. It is on my todo list. I usually write my own text, so it does not really
need to do text. I just make an image of it.
It can lay out T-shirts?

But that’s all images too. I guess it’s two forms of laying out. It’s laying
out pieces of paper that remain whole in themselves, or you can take an
image and lay it out on smaller pieces of paper. Tiling, I guess you could
call it.
Can you talk us through the process of doing the T-shirt?

1 amateur bookbinder and graphic designer
2 artist/developer, contributing amongst others to PodofoImpose and Scribus


OK. So, you need a pattern. I had just a shirt that sort of fit and I
approximated it on a big piece of paper, to figure out what the pieces were
shaped like, and took a photograph of that. I used a perspective tool to
remove the distortion. I had placed rulers on the ground so that I could
remember the actual scale of it. Then once it was in the computer, I traced
over it in Inkscape, to get just the basic outline so that I could manipulate
further. Blender didn’t want to import it so I had to retrace it. I had to
use Blender to do it because that lets me shape the pattern, take it from
flat into something that actually makes 3D shapes so whatever errors were
in the original pattern that I had on the paper, I could now correct, make
the sides actually meet. And in Blender
you have to be extremely careful with any shape, any manipulation that
you do, to make sure your surface is still unfoldable into something flat. It is
very easy to get away from flat surfaces in Blender. Once I have the molded
shape, I can export that into an .off file which my unwrapper can import
and that I can then unwrap into the sleeves and the front and the back as
well as project a panoramic image onto those pieces. Once I have that, it
becomes a pattern laid out on a giant flat surface. Then I can use Laidout
once again to tile pages across that. I can export into a .pdf with all the
individual pieces of the image that were just pieces of the larger image that
I can print on transfer paper. It took forty iron-on transfer papers I ironed
with an iron provided to me by the people sitting in front of me so that
took a while but finally I got it all done, cut it all out, sewed it up and there
you go.
Could you say something about your interest in moving from 2D to 3D
and back again? It seems everything you do is related to that?
I don’t know. I’ve been making sculpture of various kinds for quite a
long time. I’ve always drawn. Since I was about eighteen, I started making
sculptures, mainly mathematical woodwork. I don’t quite have access to a
full woodwork workshop anymore, so I cannot make as much woodwork as
I used to. It’s kind of an instance of being defined by what tools you have
available to you, like you were saying in your talk. I don’t have a woodshop,
but I can do other stuff. I can still make various shapes, but mainly out of
paper. Since I had been doing woodwork, I picked up photography I guess
and I made a ton of panoramic images. It’s kind of fun to figure out how
to project these images out of the computer into something that you can
physically create, for instance a T-shirt or a ball, or other paper shapes.
Is there ever any work that stays in the computer, or does it always need
to become physical?

Usually, for me, it is important to make something that I can actually
physically interact with. The computer I usually find quite limiting. You
can do amazing things with computers, you can pan around an image, that
in itself is pretty amazing but in the end I get more out of interacting with
things physically than just in the computer.
But with Laidout, you have moved folding into the computer! Do you
enjoy that kind of reverse transformation?

It is a challenge to do and I enjoy figuring out how to do that. In making
computer tools, I always try to make something that I can not do nearly as
quickly by hand. It’s just much easier to do in a computer. Or in the case
of spherical images, it’s practically impossible to do it outside the computer.
I could paint it with airbrushes and stuff like that but that in itself would
take a hundred times longer than just pressing a couple of commands and
having the computer do it all automatically.

My feeling about your work is that the time you spent working on the
program is in itself the most intriguing part of your work. There is of course a
challenge and I can imagine that when you are doing it like the first time you
see a rectangle, and you see it mimic a perspective you think wow I am folding
a paper, I have really done something. I worked on imposition too but more
to figure out how to work with .pdf files and I didn’t go this way of the gesture
like you did. There is something in your work which is really the way you wrote
your own framework for example and did not use any existing frameworks. You
didn’t use existing GUIs and toolboxes. It would be nice to listen to you about
how you worked, how you worked on the programming.
I think like a lot of artists, or creative people in general, you have to
enjoy the little nuts and bolts of what you’re doing in order to produce any
final work, that is if you actually do produce any final work. Part of that is
making the tools. When I first started making computer tools to help me
in my artwork, I did not have a lot of experience programming computers.
I had some. I did little projects here and there. So I looked around at the
various toolkits, but everything seemed really rigid. If you wanted to edit
some text, you had this little box and you write things in this little box and
if you want to change numbers, you have to erase it and change tiny things
with other tiny things. It’s just very restrictive. I figured I could either
figure out how to adapt those to my own purposes, or I could just figure
out my own, so I figured either way would probably take about that same
amount of time I guessed, in my ignorance. In the process, that’s not quite
been true. But it is much more flexible, in my opinion, what I’ve developed,
compared to a lot of other toolkits. Other people have other goals, so I’m
sure they would have a completely different opinion. For what I’m doing,
it’s much more adaptable.
You said you had no experience in programming? You studied in art school?

I don’t think I ever actually took computer programming classes. I grew
up with a Commodore 64, so I was always making letters fly around the
screen and stuff like that, and follow various curves. So I was always doing
little programming tricks. I guess I grew up in a household where that
sort of thing was pretty normal. I had two brothers, and they both became
computer programmers. And I’m the youngest, so I could learn from their
mistakes, too. I hope.
You’re looking for good excuses to program.
(laughs) That could be.

We can discuss at length about how actual toolkits don’t match your needs,
but in the end, you want to input certain things. With any recent toolkit, you
can do that. It’s not that difficult or time consuming. The way you do it, you
really enjoy it, by itself. I can see it as a real creative work, to come up with new
digital shapes.
Do you think that for you, the program itself is part of the work?

I think it’s definitely part of the work. That’s kind of the nuts and bolts
that you have to enjoy to get somewhere else. But if I look back on it, I
spend a huge amount of time just programming and not actually making
the artwork itself. It’s more just making the tools and all the programming
for the tools. I think there’s a lot of truth to that. When it comes time to
actually make artwork, I do like to have the tool that’s just right for the job,
that works just the way that seems efficient.
I think the program itself is an artwork, very much. To me it is also
a reflection on moving between 2D and 3D, about physical computation.
Maybe this is the actual work. Would you agree?
I don’t know. To an extent. In my mind, I kind of class it differently.
I’ve certainly been drawing more than I’ve been doing technical stuff like
programming. In my mind, the artwork is things that get produced, or a
performance or something like that. And the programming or the tools
are in service to those things. That’s how I think of it. I can see that ...
I’ve distributed Laidout as something in itself. It’s not just some secret tool
that I’ve put aside and presented only the artwork. I do enjoy the tools
themselves.
I have a question about how the 2D imagines 3D. I’ve seen Pierre and
Ludi write imposition plans. I really enjoy reading this, almost as a sort of
poetry, about what it would be to be folded, to be bound like a book. Why is
it so interesting for you, this tension between the two dimensions?
I don’t know. Perhaps it’s just the transformation of materials from
something more amorphous into something that’s more meaningful, somehow. Like in a book, you start out with wood pulp, and you can lay it out in
pages and you have to do something to that in order to instil more meaning
to it.
Is binding in any way important to you?
Somewhat. I’ve bound a few things by hand. Most of my cartoon books
ended up being just stapled, like a stack of paper, staple in the middle and
fold. Very simple. I’ve done some where you cut down the middle and lay
the sides on top and they’re perfect bound. I’ve done just a couple where
it’s an actual hand bound, hard cover. I do enjoy that. It’s quite a
time-consuming thing. There’s quite a lot of craft in that. I enjoy a lot of hand
made, do-it-yourself activities.
Do you look at classic imposition plans?

I guess that’s kind of my goal. I did look up classic book binding
techniques and how people do it and what sort of problems they encounter.
I’m not sure if I’ve encompassed everything in that, certainly. But just the
basics of folding and trimming, I’ve done my best to be able to do the same
sort of techniques that in the past have been done only manually. The
computer can remember things much more easily.
Imposition plans are quite fixed, you have this paper size and it works with
specific imposition plans. I like the way your tool is very organic, you can play
with it. But in the end, something very classic comes out, an imposition plan you
can use over and over, which gives a sort of continuity.
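
To make concrete what such a classic plan encodes, here is a minimal Python sketch of the page order for a simple saddle-stitched booklet, the usual booklet-printing case: pages are paired from the outside of the book inwards so that the folded and nested sheets read in sequence. It is an illustration only and ignores creep, margins and paper handling.

    # Saddle-stitch imposition sketch: each sheet carries four pages,
    # paired from the outside of the booklet inwards.
    def booklet_order(n_pages):
        n = n_pages + (-n_pages % 4)          # pad to a multiple of 4
        sheets = []
        for s in range(n // 4):
            outer, inner = n - 2 * s, 1 + 2 * s
            # one side of the sheet, then the other side
            sheets.append(((outer, inner), (inner + 1, outer - 1)))
        return sheets

    for outside, inside in booklet_order(8):
        print("outside:", outside, "inside:", inside)
    # outside: (8, 1) inside: (2, 7)
    # outside: (6, 3) inside: (4, 5)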
What’s impressive is the attention you put into the visualization. There are
some technical programs which do really big imposition stuff, but it’s always at the
printer. Here, you can see the shape being peeled. It’s really impressive. I agree
with Femke that the program is an artwork too, because it’s not only technical,
it’s much more.
How is the material imagined in the tool?

So far, not really completely. When you fold, you introduce slight twists
and things like that. And that depends on the stiffness of the paper and
the thickness of the paper and I’ve not adequately dealt with that so much.
If you just have one fold, it’s pretty easy to figure out what the creep is for
that. You can do tests and you can actually measure it. That’s pretty easy
to compensate for. But if you have many more folds than that, it becomes
much more difficult.
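
The creep he mentions can be sketched as a simple per-sheet correction: the deeper a sheet is nested in a saddle-stitched booklet, the further its content has to be nudged towards the spine. The per-sheet value here is a measured quantity, as he describes, not something the sketch derives; it is a toy calculation, not Laidout's behaviour.

    # Toy creep compensation for nested, saddle-stitched sheets.
    # Sheet 0 is the outermost sheet; the shift grows as sheets nest.
    def creep_offsets(n_sheets, creep_per_sheet_mm):
        """Spine-ward shift in mm for each sheet, outermost first."""
        return [s * creep_per_sheet_mm for s in range(n_sheets)]

    # Five nested sheets, 0.25 mm of measured creep per nesting level:
    print(creep_offsets(5, 0.25))   # [0.0, 0.25, 0.5, 0.75, 1.0]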
Are you thinking about how to do that?

I am.

That would be very interesting. To imagine paper in digital space, to give
an idea of what might come out in the end. Then you really have to work
your metaphors, I think?

A long time ago, I did a lot of T-shirt printing. Something that I did not
particularly have was a way to visualize your final image on some kind of shirt
and the same thing applies for book binding, too. You might have a strange
texture. It would be nice to be able to visualize that beforehand, as well
as the thickness of the paper that actually controls physical characteristics.
These are things I would like to incorporate somehow but haven’t gotten
around to.
You talked about working with physical input, having touchpads ... Can
you talk a bit more about why you’re interested in this?

You can do a lot of things with just a mouse and a keyboard. But it’s
still very limiting. You have to be sitting there, and you have to just control
those two things. Here’s your whole body, with which you can do amazing
things, but you’re restricted to just moving and clicking and you only have a
single point up on the screen that you have to direct very specifically. It just
seems very limiting. It’s largely an unexplored field, just to accept a wider
variety of inputs to control things. A lot of the multitouch stuff that’s been
done is just gestures for little tiny phones. It’s mainly for browsing, not
necessarily for actual work. That’s something I would like to explore quite a
lot more.
Do you have any fantasies about how these gestures could work for real?

There’s tons of sci fi movies, like ‘Minority Report’, where you wear these
gloves and you can do various things. Even that is still just mainly browsing.
I saw one, it was a research project by this guy at Caltech. He had made
this table and he wore polarized glasses so he could look down at this table
and see a 3D image. And then he had gloves on, and he could sculpt things
right in the air. The computer would keep track of where his hand is going.
Instead of sculpting clay, you’re sculpting this 3D mesh. That seemed quite
impressive to me.
You’re thinking about 3D printers, actually?

It’s something that’s on my mind. I just got something called the
Eggbot. You can hold spheres in this thing and it’s basically a plotter that
can print on spherical surfaces or round surfaces. That’s something I’d like
to explore some more. I’ve made various balls with just my photographic
panoramas glued onto them. But that could be used to trace an outline for
something and then you could go in with pens or paints and add more detail.
If you’re trying to paint on a sphere, just paint and no photograph, laying out
an outline is perhaps the hardest part. If you simplify it, it becomes much
easier to make actual images on spheres. That would be fun to explore.

I’d like to come back to the folding. Following your existing aesthetic, the
stiffness and the angles of the drawing are very beautiful. Is it important to you,
preserving the aesthetic of your programs, the widgets, the lines, the arrows ...

I think the specific widgets, in the end, are not really important to me
at all. It’s more just producing an actual effect. So if there is some better
way, more efficient way, more adaptable way to produce some effect, then it’s
better to just completely abandon what doesn’t work and make something
that’s new, that actually does work. Especially with multitouch stuff, a lot of
old widgets make no more sense. You have to deal with a lot of other kinds
of things, so you need different controls.

It makes sense, but I was thinking about the visual effect. Maybe it’s not
Laidout if it’s done in Qt.
Your visuals and drawings are very aesthetically precise. We’re wondering
about the aesthetics of the program, if it’s something that might change in the
future.
You mean would the quality of the work produced be changed by the
tools?

That’s an interesting question as well. But particularly the interface, it’s
very related to your drawings. There’s a distinct quality. I was wondering
how you feel about that, how the interaction with the program relates to the
drawings themselves.

I think it just comes back to being very visually oriented. If you have to
enter a lot of values in a bunch of slots in a table, that’s not really a visual
way to do it. Especially in my artwork, it’s totally visual. There’s no other
component to it. You draw things on the page and it shows up immediately.
It’s just very visual. Or if you make a sculpture, you start with this chunk
of stuff and you have to transform it in some way and chop off this or sand
that. It’s still all very visual. When you sit down at a computer, computers
are very powerful, but what I want to do is still very visually oriented. The
question then becomes: how do you make an interface that retains the visual
inputs, but that is restricted to the types of inputs computers need to have
to talk to them?
The way someone sets up his workshop says a lot about his work. The way
you made Laidout and how you set up its screen, it’s important to define a spot
in the space of the possible.

What is nice is that you made the visualisation so important. The windows
and the rest of the interface is really simple, the attention is really focused on
what’s happening. It is not like shiny windows with shadows everywhere, you feel
like you are not bothered by the machine.
At the same time, the way you draw the thickness of the line to define the
page is a bit large. For me, these are choices, and I am very impressed because I
never manage to make choices for my own programs. The programs you wrote,
or George Williams, make a strong aesthetic assertion like: This is good. I can’t
do this. I think that is really interesting.
Heavy page borders, that still comes down to the visual thing you end
up with, is still the piece of paper so it is very important to find out where
that page outline actually is. The more obvious it is, the better.

Yes, I think it makes sense. For a while now, I paid more attention than
others in Scribus to these details like the shape of the button, the thickness of the
lines, what pattern do you choose for the selection, etcetera. I had a lot of feedback
from users like: I want this, this is too big and at some point you want to please
everybody and you don’t make choices. I don’t think that you are so busy with
what others think.
Are there many other users of the program?

Not that I know of (laughter). I know that there is at least one other
person that actually used it to produce a booklet. So I know that it is
possible for someone other than myself to make things with it. I’ve gotten
a couple of patches from people to not make it crash at various places but
since Laidout is quite small, I can just not pay any attention to criticism.
Partially because there isn’t any, and I have particular motivations to make
it work in a certain way and so it is easier to just go forward.

I think people that want to use your program are probably happy with this
kind of visualisation. Because you wrote it alone, there is also a consistency across
the program. It is not like Scribus, that has parts written by a lot of people so you
can really recognize: this is Craig (Bradney), this is Andreas (Vox), this is Jean
(Ghali), this is myself. There is nothing to follow.
I remember Donald Knuth talking about TeX and he was saying that
the entire program was written from scratch three times before its current
incarnation. I am sympathetic to that style of programming.
Start again.
I think it is a good idea, to start again. To come back to a little detail. Is
there a fileformat for your imposition tool, to store the imposition plan? Is it a
text or a binary format?

It is text-based, an indented file format, sort of like Python. I did
not want to use XML, every time I try to use XML there are all these
greater thans and less thans. It is better than binary, but it is still a huge
mess. When everything is indented like a tree, it is very easy to find things.
The only problem is to always input tabs, not spaces. I have two different
imposition types, basically, the flat-folding sheets and the three dimensional
ones. The three dimensional one is a little more complicated.
If you read the file, do you know what you are folding?

Not exactly. It lists what folds exist. If you have a five by five grid, it
will say Fold along this line, over in such and such direction. What it actually
translates to in the end, is not currently stored in the file. Once you are in
Laidout you can export into a PodofoImpose plan file.
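
To give a feel for the kind of indented, tab-based file he describes, here is a made-up miniature and a few lines of Python that read it back as (depth, line) pairs. The keywords (grid, fold, line, direction) are invented for illustration; they are not Laidout's actual vocabulary.

    # A hypothetical, Laidout-like indented fold plan (tabs mark nesting)
    # and a tiny reader for it. Field names are made up.
    plan = ("imposition\n"
            "\tgrid 5 5\n"
            "\tfold\n"
            "\t\tline 2 vertical\n"
            "\t\tdirection over-to-right\n"
            "\tfold\n"
            "\t\tline 1 horizontal\n"
            "\t\tdirection under-to-top\n")

    def parse_indented(text):
        """Yield (depth, content) for each non-empty line; depth = leading tabs."""
        for raw in text.splitlines():
            if raw.strip():
                depth = len(raw) - len(raw.lstrip("\t"))
                yield depth, raw.strip()

    for depth, content in parse_indented(plan):
        print("  " * depth + content)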
Is this file just values, or are there keywords, is it like a text?

I try to make it pretty readable, like trimright or trimleft.
Does it talk about turning pages? This I find beautiful in PodofoImpose
plans, you can almost follow the paper through the hands of the program.
Turn now, flip backwards, turn again. It is an instruction for a dance.
Pretty much.

The text you can read in the PodofoImpose plans was taken from what Ludi
and me did by hand. One of us was folding the paper, and the other was writing
it into the plan. I think a lot of the things we talk about, are putting things from
the real world into the computer. But you are putting things from the computer
into the real world.
Can you describe again these two types of imposition, the first one being
very familiar to us. It must be the most frequently asked question on the
Scribus mailing list: How to do imposition. Even the most popular search
term on the OSP website is ‘Bookletprinting’. But what is the difference with
the plan for a 3D object? A classic imposition plan is also somehow about
turning a flat surface into a three dimensional object?
It is almost translatable. I’m reworking the 3D version to be able to
incorporate the flat folding. It is not quite there yet, the problem is the
connection between the pages. Currently, in the 3D version, you have a
shape that has a definitive form and that controls how things bleed across
the edges. When you have a piece of paper for a normal imposition, the
pages that are next to each other in the physical form are not necessarily
related to each other at all in the actual piece of paper. Right now, the piece
of paper you use for the 3D model is very defined, there is no flexibility.
Give me a few months!
So it is very different actually.

It is a different approach. One person wanted to do flexagons, it is sort
of like origami I guess, but it is not quite as complicated. You take a piece
of paper, cut out a square and another square, and then you can fold it and
you end up with a square that is actually made up of four different sections.
Then you can take the middle section, and you get another page and you can
keep folding in strange ways and you get different pages. Now the question
becomes: how do you define that page, that is a collection of four different
chunks of paper? I’m working on that!
We talk about the move from 2D to 3D as if these pages are empty. But
you actually project images on them and I keep thinking about maps, transitional objects where physical space is projected on paper which then becomes a
second real space and so on. Are you at all interested in maps?
A little bit. I don’t really want to because it is such a well-explored
field already. Already for many hundreds of years the problem is how do
you represent a globe onto a more or less two dimensional surface. You
have to figure out a way to make globe gores or other ways to project it and
then glue it on to a ball for example. There is a lot of work done with that
particular sort of imagery, but I don’t know.
Too many people in the field!

Yes. One thing that might be interesting to do though is when you have
a ball that is a projection surface, then you can do more things, like overlays
onto a map. If you want to simulate earthquakes for example. That would
be entertaining.
And the panoramic images you make, do you use special equipment for
this?

For the first couple that I made, I made this 30-sided polyhedron that
you could mount a camera inside and it sat on a base in a particular way so
you could get thirty chunks of images from a really cheap point and shoot
camera. You do all that, and you have your thirty images and it is extremely
laborious to take all these thirty images and line them up. That is why I
made the 3D portion of Laidout, it was to help me do that in an easier
fashion. Since then I’ve got a fish-eyed lens which simplifies things quite
considerably. Instead of spending ten hours on something, I can do it in ten
minutes. I can take 6 shots, and one shot up, one shot down. In Hugin you
can stitch them all together.

And the kinds of things you photograph? We saw the largest rodent on
earth? How do you pick a spot for your images?

I am not really sure. I wander around and then photograph whatever
stands out. I guess some unusual configuration of architecture frequently
or sometimes a really odd event, or a political protest sometimes. The trick
with panoramas is to find an area where something is happening all over
the globe. Normally, on sunny days, you take a picture and all your image
is blank. As pretty as the blue sky is, there is not a lot going on there
particularly.
Panoramic images are usually spherical or circular. Do you take certain
images with a specific projection surface in mind?
To an extent. I take enough images. Once I have a whole bunch of
images, the task is to select a particular image that goes with a particular
shape. Like cubes: there are few lines and it is convenient to line them up to
an actual rectangular space like a room. The tetrahedron made out of cones,
I made one of Mount St. Helens, because I thought it was an interesting
way to put the two cones together. You mentioned 3D printers earlier, and
one thing I would like to do is to extend the panoramic image to be more
like a progression. For most panoramic images, the focal point is a single
point in space. But when you walk along a trail, you might have a series of
photographs all along. I think it could be an interesting work to produce,
some kind of ellipsoidal shape with a panoramic image that flows along the
trail.
Back to Laidout, and keeping with the physical and the digital. Would
there be something like a digital papercut?
Not really. Maybe you can have an Arduino and a knife?
I was more imagining a well placed crash?

In a sense there is. In the imposition view, right now I just have a green
bar to tell where the binding is. However when you do a lot of folds, you
usually want to do a staple. But if you are stapling and there is not an actual
fold there, then you are screwed.


The following statements were recorded by Urantsetseg
Ulziikhuu (Urana) in 2014. She studied communication in
Istanbul and Leuven and joined Constant for a few months
to document the various working practices at Constant
Variable. Between 2011 and 2014, Variable housed studios
for Artists, Designers, Techno Inventors, Data Activists,
Cyber Feminists, Interactive Geeks, Textile Hackers, Video
Makers, Sound Lovers, Beat Makers and other digital creators who were interested in using F/LOSS for
their creative experiments.

Why do you think people should use and/or practice
Open Source software? What is in it for you?
Urantsetseg Ulziikhuu

The knitting machine that I am using normally has a
computer from the eighties. Some have these scanners that are really old
and usually do not work anymore. They became obsolete. If it wasn’t for
Open Source, we couldn’t use these technologies anymore. Open Source
developers decided that they should do something about these machines and
found that it was not that complicated to connect these knitting machines
directly to computers. I think it is a really good example of how Open Source
is important, because these machines are no longer produced and industry
is no longer interested in producing them again, and they would have died
without further use.
The idea that Open Source is about sharing is also important. If you try to
do everything from zero, you just never advance. Now with Open Source, if
somebody does something and you have access to what they do, you can
take it further and take it into a different direction.

Claire Williams


I haven’t always used Open Source software. It started
at the Piet Zwart Institute where there was a decision made by Matthew
Fuller and Femke Snelting who designed the program. They brought a
bunch of people together that asked questions about how our tools influence
practice, how they are used. And so, part of my process is then teaching in
that program, and starting to use Free Software more and more. I should
say, I had already been using one particular piece of Free Software which
is FFmpeg, a program that lets you work with video. So there again there
was a kind of connection. It was just by virtue of the fact that it was
one of the only tools available that could take a video, pull out frames,
work with lots of different formats, just an amazing tool. So it started with
convenience. But the more that I learned about the whole kind of approach
of Open Source, the more Open Source I started to use. I first switched from
MacOSX to maybe Dual Booting and now indeed I am pretty much only
using Open Source. Not exclusively Open Source, because I occasionally use
platforms online that are not free, and some applications.
I am absolutely convinced that when you use these tools, you are learning
much more about the inner workings of things, about the design decisions that
go into a piece of software so that you are actually understanding at a very
deep level, and this then lets you move between different tools. When
tools change, or new things are offered, I think it is really a deep learning
that helps you for the future. Whereas if you just focus on the specific
particularities of one platform or piece of software, that is a bit fragile and
will inevitably be obsolete when the software stops being developed or some
new way of working comes about.
Michael Murtaugh

I use Open Source software every day, as I have
Debian on my laptop. I came to it through anarchism – I don’t have a tech
background – so it’s a political thing mainly. Not that F/LOSS represents
a Utopian model of production by any means! As an artist it fits in with
my interest in collaborative production. I think the tools we use should be
malleable by the people who use them. Unfortunately, IT education needs
to improve quite a lot before that ideal becomes reality.
Politically, I believe in building a culture which is democratic and malleable
by its inhabitants, and F/LOSS makes this possible in the realm of software.
The benefits as a user are not so great unless you are tech-savvy enough to
really make use of that freedom. The software does tend to be more secure
and so on, though I think we’re on shaky ground if we try to defend F/LOSS
in terms of its benefits to the end user. Using F/LOSS has a learning curve,
challenges which I put up with because I believe in it socially. This would
probably be a different answer from say, a sysadmin, someone who could see
really concrete benefits of using F/LOSS.
Eleanor Greenhalgh
Actually I came from Open Content and alternative licensing to the technical side of using GNU/Linux. My main motivation
right now is the possibility to develop a deeper relationship with my tools.
For me it is interesting to create my own tools for my work, rather than
to use something predefined. Something everyone else uses. With Free
Software this is easier – to invent tools. Another important point is that
with Free Software and open standards it’s more likely that you will be able
to keep track of your work. With proprietary software and formats, you are
pretty much dependent on decisions of a software company. If the company
decides that it will not continue an application or format, there is not much
you can do about it. This happened to users of FreeHand. When Adobe
acquired their competitor Macromedia they decided to discontinue the development of FreeHand in favour of their own product Illustrator. You can
sign a petition, but if there is no commercial interest, most probably nothing
will happen. Let’s see what happens to Flash.

Christoph Haag

I studied sculpture, which is a very solitary way of working. Already through my studies, this idea of an artist sitting around in a
studio somewhere, being by himself, just doing his work by himself, didn’t
make sense to me. It is maybe true for certain people, but it is definitely
not true to me today, the person I am. I always integrated other people into
my work, or do collaborative work. I don’t really care about this ‘it is my
work’ or ‘it is your work’, if you do something together, at some point the
work exists by itself. For me, that is the greatest moment, it is just independent. It actually rejoins the authorship question, because I don’t think
you can own ideas. You can kind of put them out there and share them.
It is organic, like things that can grow and will become bigger
and bigger, become something else that you couldn’t ever have thought of. It
makes the horizon much bigger. It is a different way of working I guess.
The obvious reason is that it is free, but the sharing philosophy is really at
the core of it. I have always thought that when you share things, you do not
get back things instantly, but you do get so many things in another way,
not in the way you expect. But if you put an idea out, use tools that are
open and change them, put them out again. So there is a lot of back and
forth of communication. I think that is super important. It is the idea of
evolving together, not just by ourselves. I really do believe that we evolve
much quicker if we are together than everybody trying to do things by him-
or herself. I think it is a very European idea to get into this individualism,
this idea of doing things by myself, my thing. But I think we
can learn a lot from Asia, just ways of doing, because there community is
much more important.
Christina Clar
I don’t necessarily develop software or code, because I am not a software developer. But I would say, I am involved in an
analog way. I do use Open Source software, although I have to say I do not
do much with computers. Most of my work is analog. But I do my research
on the web. I am a user.
I started to develop an antipathy against large corporations, operating systems or software, and started to look for alternatives. Then you come to the
Linux system and Ubuntu which has a very user-friendly interface. I like the
fact that behind the software that I am using, there is a whole community,
who are until now without major financial interests and who develop tools
for people like me. So now I am totally into Open Source software, and I
try to use as much as I can. So my motivation would be I want to get off
the track of big corporates who will always kind of lead you into consuming
more of their products.
John Colenbrander

What does Free Culture mean to you? Are you taking
part in a ‘Free Culture Movement’?
Urantsetseg Ulziikhuu

Michael Murtaugh I’d like to think so, but I realised that it is quite
hard. Only now, I am seriously trying to really contribute back to projects
and I wouldn’t even say that I am an active contributor to Free Software
projects. I am much more of a user and part of the system. I am using it in
my teaching and my work, but now I try to maybe release software myself in
some way or I try to create projects that people could actually use. I think
it is another kind of dimension of engagement. I haven’t really fully realised
it, so yes for that question if I am contributing to Free Culture. Yes, but I
could go a lot deeper.
John Colenbrander I am a big supporter of the idea of Free Culture. I
think information should be available for people, especially for those who
have little access to information. I mean we live in the West and we have
access to information more or less with physical libraries and institutions
where we can go. Especially in Asia, South America, Africa this is very
important. There is a big gap between those who have access to knowledge
and those who don’t have access to knowledge.
That’s a big field to explore to be able to open up information to people who
have very poor access to information. Maybe they are not even able to write
or read. That already is a big handicap. So I think it is a big mission in
that sense.

Could Free Culture be seen as an opposition to commercialism?
Urantsetseg Ulziikhuu

Michael Murtaugh It is a tricky question. I think no matter what, if you
go down the stack, in terms of software and hardware, if you get down to
the deepest level of a computer then there is little free CPU design. So I
think it is really important to be able to work in these kinds of hybrid spaces
and to be aware then of how free Free is, and always look for alternatives
when they are available. But to a certain degree, I think it is really hard to
go for a total absolute. Or it is a decision, you can go absolute but that may
mean that you are really isolated from other communities. So that’s always
a bit of a balancing act, how independent can you be, how independent you
want to be, how big does your audience need to be, or your community needs
to be. So that’s a lot of different decisions. Certainly, when I am working
in the context of an art school with design practitioners, you know it is not
always possible to really go completely independent and there are lots of
implications in terms of how you work and whom you can work with, and
the printers you can work with. So it is always a little bit of trade-off, but it
is important to understand what the decisions are.


Eleanor Greenhalgh I think the idea of a Free Culture movement is very
exciting and important. It has always gone on, but stating it in copyright-aware terms issues an important challenge to the ‘all rights reserved’ status quo. At the same time I think it has limitations, at least in its current form.
I’m not sure that rich white kids playing with their laptops is necessarily a
radical act. The idea and the intention are very powerful though, because
it does have the potential to challenge the way that power – in the form of
‘intellectual property’ – is distributed.
Christoph Haag Copyright has become much more enforced over the last
years than it was ever before. In a way, culture is being absorbed by companies trying to make money out of it. And Free Culture developed as a
counter movement against this. When it comes to mainstream culture, you
are most often reduced to a consumer of culture. Free Culture then is an
obvious reaction. The idea of culture where you have the possibility to engage again, to become active and create your version, not just to consume
content.

How could Open Source software be economically sustainable, in a way that is beneficial for both developers/creators and users?
Urantsetseg Ulziikhuu

Eleanor Greenhalgh That’s a good question! A very hard one. I’m not
involved enough in that community to really comment on its economic future. But it does, to me, highlight what is missing from the analysis in
Free Culture discourse, the economic reality. It depends on where they (developers) work. A lot of them are employed by companies so they get a
salary. Others do it for a hobby. I’d be interested to get accurate data on
what percentage of F/LOSS developers are getting paid, etc. In the absence
of that data, I think it’s fair to say it is an unsolved problem. If we think
that developers ‘should’ be compensated for their work, then we need to talk
about capitalism. Or at least, about statutory funding models.


It is interesting that you used both ‘sustainability’ and
‘economic viability’. And I think those are two things very often in opposition. I am doing a project now about publishing workflows and future electronic publishing forums. And that was the one thing we looked at. There
were several solutions on the market. One was a platform called ‘Editorial’
which was a very nice website that you could use to mark down texts collaboratively and then it could produce ePub format books. After about
six months of running, it closed down as many platforms do. Interestingly,
in their sign-off message it said: You have a month to get your stuff out of the
website, and sorry we have decided not to Open Source the project. As much as
we loved making it, it was just too much work for us to keep this running. In
terms of real sustainability, Open Source of course would have allowed them
to work with anybody, even if it is just a hobby.
Michael Murtaugh

It is very related to the passion of doing these things.
Embroidering machines have copyrighted software installed. The software
itself is very expensive, around 1000, and the software for professionals is
6000 to buy. Embroidering machines are very expensive themselves too.
This software is very tight and closed, you even have to have a special USB
key for patterns. And there are these two guys who are software developers,
they are trying to come up with a format which all embroidering machines
could read. They take their time to do this and I think in the end if the
project works out, they will probably get attention and probably get paid
also. Because instead of giving 1000 to copyrighted software, maybe you
would be happy to give 50 to these people.
Claire Williams


Date: Thu, 12 Sep 2013 15:50:25 +0200
From: FS
To: OSP

Dear OSP,

For a long time I have wanted to organise a conversation with you
about the place and meaning of distributed version control in OSP
design work. First of all because after three years of working with
Git intensely, it is a good moment to take stock. It seems that many
OSP methods, ideas and politics converge around it and a conversation discussing OSP practice linked to this concrete (digital) object
could produce an interesting document; some kind of update on what
OSP has been up to over the last three years and maybe will be in
the future. Second: Our last year in Variable has begun. Under the
header Etat des Lieux, Constant started gathering reflections and documents to archive this three year working period. One of the things
I would like to talk about is the parallels and differences between a
physical studio space and a distributed workflow. And of course I am
personally interested in the idea of ‘versions’ linked to digital collaboration. This connects to old projects and ideas and is sparked again
by new ones revived through the Libre Graphics Research Unit and
of course Relearn.
I hope you are also interested in this, and able to make time for it. I
would imagine a more or less structured session of around two hours
with at least four of you participating, and I will prepare questions
(and cake).
Speak soon!
xF


How do you usually explain Git to design students?
Before using Git, I would work on a document. Let’s say a layout, and to
keep a trace of the different versions of the layout, I would append _01, _02
to the files. That’s in a way already versioning. What Git does is make
that process somehow transparent, in the sense that it takes care of
it for you. Or better: you have to make it take care of it for you. So instead of
having all files visible in your working directory, you put them in a database,
so you can go back to them later on. And then you have some commands to
manipulate this history. To show, to comment, to revert to specific versions.
More than versioning your own files, it is a tool to synchronize your work
with others. It allows you to work on the same projects together, to drive
parallel projects.
It really is a tool to make collaboration easier. It allows you to see differences.
When somebody proposes a new version of a file to you, it highlights what has
changed. Of course this mainly works on the level of programming code.
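
As a rough illustration of the workflow just described (the file name and commit messages here are hypothetical, and the sketch assumes Git is installed and on the PATH), the manual _01/_02 habit maps onto a handful of commands:

    import subprocess

    def git(*args):
        # Thin wrapper: run a git command and return its output as text.
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    git("init")                                    # start tracking this directory
    git("add", "poster.sla")                       # stage the layout file
    git("commit", "-m", "first version of the poster")
    # ... edit poster.sla, then record the next state instead of saving poster_02.sla
    git("add", "poster.sla")
    git("commit", "-m", "second version, new colour scheme")
    print(git("log", "--oneline"))                 # show the history
    git("checkout", "HEAD~1", "--", "poster.sla")  # bring back the previous version
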
Did you have any experience with Git before working with OSP?
Well, not long before I joined OSP, we had a little introduction to Mercurial,
another versioning software, at school in 2009. Shortly after I switched to
Git. I was working with someone else who was working with Git, and it was
so much better.
Alex was interested in using Git to make Brainch. 1 We wanted to make a web
application to fork texts that are not code. That was our first use of Git.
I met OSP through Git in a way. An intern taught me the program and he
said: Eric, once you get it, you’ll get so excited! We were in the cafeteria of
the art school. I thought it was really special, like someone was letting me
in on a secret and we were the only ones in the art school who knew about
it. He taught me how to push and pull. I saw quickly how Git really
is modeled on how culture works. And so I felt it was a really interesting,
promising system. And then I talked about it at the Libre Graphics Meeting
in 2010, and so I met OSP.
1 A distributed text editing platform based on Django and Git: http://code.dyne.org/brainch


I started to work on collaborative, graphic design related stuff when I was
developing a font manager. I’ve been connected to two versioning systems
and mainly used SVN. Git came well after; it was really connected to web
culture, compared to Subversion, which is more software related.
What does it mean that Git is referred to as ‘distributed versioning’?
The first command you learn in Git is the clone command. It means that
you make a copy of a project that is somehow autonomous. Contrary to
Subversion, you don’t have this server-client architecture. Every repository
is in itself a potential server and client. Meaning you can keep track of your
changes offline.
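
A minimal sketch of what ‘every repository is a potential server and client’ means in practice; the repository URL and file names below are placeholders, and the sketch only assumes a reachable Git remote:

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    # A clone is a full, autonomous copy: it carries the complete history.
    git("clone", "git://example.org/osp/some-project.git", "some-project")

    # Offline, work is recorded locally without talking to any server.
    git("-C", "some-project", "add", "cover.svg")
    git("-C", "some-project", "commit", "-m", "rework the cover, on the train")

    # Only once a connection is available are the local commits exchanged.
    git("-C", "some-project", "push", "origin", "master")
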
At some point, you decided to use ‘distributed versioning’ rather than a
centralized system such as Subversion. I remember there was quite some
discussion ...
I was not hard to convince. I had no experience with other versioning
systems. I was just excited by the experience that others had with this new
tool. In fact there was this discussion, but I don’t remember exactly the
arguments between SVN and Git. From what I remember, Git was easier.
The discussion was not really on the nature of this tool. It was just: who
would keep Git running for OSP? Because the problem is not the system in
itself, it’s the hosting platform. We didn’t find any hosted platform which
fitted our taste. The question was: do we set up our own server, and who is
going to take care of it. At that time Alex, Steph and Ivan were quite excited
about working with Git. And I was excited to use Subversion instead, but I
didn’t have the time to take care of setting it up and everything.
You decided not to use a hosted platform such as Gitorious or GitHub?
I guess we already had our own server and were hosting our own projects. But
Pierre, you used online platforms to share code?
When I started developing my own projects it was kind of the end of
SourceForge. 2 I was looking for a tool more in the Free Software tradition.
2 SourceForge is a web-based source code repository. It was the first platform to offer this service for free to Open Source projects.


There was gna, and even though the platform was crashing all the time, I
felt it was in line with this purpose.
If I remember correctly, when we decided between Git and Subversion,
Pierre, you were also not really for it because of the personality of its main
developer, Linus Torvalds. I believe it was the community aspect of Git that
bothered you.

Well Git has been written to help Linus Torvalds receive patches for the
Linux kernel; it is not aimed at collaborative writing. It was more about
making it convenient for Linus. And I didn’t see a point in making my
practice convenient for Linus. I was already using Subversion for a while
and it was really working great at providing an environment to work together with a lot of people and check out different versions. Anything you
expect from a versioning system was there, all elements for collaborative
work were there. I didn’t see the point of changing to something that I didn’t
feel as comfortable with, culturally. This question of checking out different
directories of repositories was really important to me. At that time (Git has
evolved a lot) it was not possible to do that. There were other technical
aspects I was quite keen on. I didn’t see why to go for Git, which was not
offering the same amount of good stuff.

But then there is this aspect of distribution, and that’s not in Subversion.
If some day somebody wants a complete copy of an OSP project,
including all its history, they would need to ask us, or we would have to do something complicated to give it to them.

I was not really interested in this ‘spreading the whole repository’. I was
more concerned about working together on a specific project.

It feels like your habit of keeping things online has shifted. From making
an effort afterwards to something that happens naturally, as an integral
part of your practice.

It happened progressively. There is this idea that the Git repository is linked
to the website, which came after. The logic is to keep it all together and
linked, online and alive.

That’s not really true ... it was the dream we had: once we have Git, we
share our files while working on them. We don’t need to have this effort
afterwards of cleaning up the sources and it will be shareable. But it is not
true. If we do not put in an effort to make it shareable, it remains completely
opaque. It still requires an investment of time. I think it takes about 10%
of the time of a project to make it readable from the outside afterwards.

Now, with the connection to our public website, you’re more conscious that all
the files we use are directly published. Before we had a Git web application that
allowed someone to just browse repositories, but it was not visual, so it was hard
to get into it. The Cosic project is a good example. Every time I want to show
the project to someone, I feel lost. There are so many files and you really don’t
know which ones to open.

Maybe, Eric, you can talk about ‘Visual Culture’?

Basically ‘Visual Culture’ is born out of this dream I talked about just now.
That turns out not to be true, but shapes our practice and helps us think
about licensing and structuring and all those interesting questions. I was
browsing through this Git interface that Stéphanie described, and thought
it was a missed opportunity, because here is this graphic design studio
that publishes all its work while it is working. That has all kinds
of consequences, but if you can’t see it, if you don’t know anything about
computer programming, you have no clue what’s going on. And also,
because it’s completely textual. And for example a .sla file, if you don’t know
about Open Source, if you don’t know about Scribus, it could just as well be
salad. It is clear that Git was made for text. The idea was to show all the
information that is already there in a visual form. But an image is an image,
and type is a typeface, and it changes in a visual way. I thought it made
sense for us to do. We didn’t have anyone writing posts on our blog. But
we had all this activity in the Git repository.
It started to give a schematic view of our practice, and it renders the current
activity visible, which is very exciting. But it is also very frustrating because we have lots
of ideas and very little time to implement them. So the ‘Visual Culture’ project
is terribly behind compared to our imagination.

Take the foundry, for example. Or the future potential of the ‘Iceberg’ folders. Or
our blog, which is sometimes cruelly missing. We have ways to fill all these functions
with ‘Visual Culture’ but still no time to do it!
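
In spirit, ‘Visual Culture’ reads activity that is already recorded in the repository and turns it into a web page. A very reduced sketch of that idea, not OSP’s actual code; the output file is a placeholder and it assumes the script runs inside a Git working copy:

    import html
    import subprocess

    def recent_commits(repo=".", count=20):
        # Ask git for machine-readable log entries: hash, author, date, subject.
        fmt = "%h%x09%an%x09%ad%x09%s"
        out = subprocess.run(
            ["git", "-C", repo, "log", f"-{count}",
             f"--pretty=format:{fmt}", "--date=short"],
            check=True, capture_output=True, text=True).stdout
        return [line.split("\t") for line in out.splitlines()]

    items = "\n".join(
        f"<li><b>{html.escape(date)}</b> {html.escape(author)}: "
        f"{html.escape(subject)} <code>{sha}</code></li>"
        for sha, author, date, subject in recent_commits())

    with open("activity.html", "w") as page:
        page.write(f"<ul>\n{items}\n</ul>\n")
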
In a way you follow established protocols on how Open Source code is
usually published. There should be a license, a README file ... But OSP
also decided to add a special folder, which you called ‘Iceberg’. This is a
trick to make your repository more visual?

Yeah, because even if something is straightforward to visualise, it helps if
you can make a small render of it. But most of the work is an accumulation
of files, like a webpage. The idea is that in the ‘Iceberg’ folder, we can put a
screenshot, or other images ...

We wanted the files that are visible, to be not only the last files added. We wanted
to be able to show the process. We didn’t want it to be a portfolio and just show
the final output. But we wanted to show errors and try-outs. I think it’s not only
related to Git, but also to visual layout. When you want to share software, we
say release early, release often, which is really nice. But it’s not enough to just
release, because you need to make it accessible to other people to understand what
they are reading. It’s like commenting your code, making it ... I don’t want to
say ‘clean’ ... legible, using variable names that people can understand. Because,
sometimes when we code just for ourselves, I use French variable names so that I’m sure
they are not reserved words in the programming language. But then it is not
accessible to many people. So stuff like that.
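
A sketch of the kind of helper that could fill an ‘Iceberg’ folder with small renders; this is hypothetical, not how OSP describes doing it, and it assumes the Pillow imaging library and only handles bitmap images:

    from pathlib import Path
    from PIL import Image  # Pillow

    SOURCES = Path(".")        # root of the working copy
    ICEBERG = Path("iceberg")  # folder of small renders kept under version control
    ICEBERG.mkdir(exist_ok=True)

    for picture in SOURCES.rglob("*.png"):
        if ICEBERG in picture.parents:
            continue           # don't re-thumbnail the thumbnails
        thumb = Image.open(picture)
        thumb.thumbnail((400, 400))  # fit inside 400 x 400 pixels
        thumb.save(ICEBERG / f"{picture.stem}-thumb.png")
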
You have decided to use a tool that’s deeply embedded in the world of
F/LOSS. So I’ve always seen your choice for Git both as a pragmatic
choice as well as a fan choice?

Like as fans of the world of Open Source?

Yes. By using this tool you align yourself, as designers, with people that
develop software.

I’m not sure, I join Pierre on his feelings towards Linus Torvalds, even
though I have less anger at him. But let’s say he is not someone I especially
like in his way of thinking. What I like very much about Git is the distributed aspect. With it you can collaborate without being aligned together.
While I think Linus Torvalds’s idea is very liberal and in a way a bit sad, this
idea that you can collaborate without being aligned, without going through
this permission system, is interesting. With Scribus for example, I never
collaborated on it; it’s such a pain to go through the process. It’s good and
bad. I like the idea of a community which is making a decision together; at
the same time it is so hard to enter this community that you just don’t want
to, and give up.
How does it feel, as a group of designer-developers, to adopt workflows,
ways of working, and also a vocabulary that comes from software development?

On the one hand it’s maybe a fan act. We like this movement of F/LOSS
development which is not always given the importance it has in the cultural
world. It’s like saying hey I find you culturally relevant and important. But
there’s another side to it. It’s not just a distant appropriation, it’s also the fact
that software development is such a pervasive force. It’s so much shaping
the world, that I feel I also want to take part in defining what are these
procedures, what are these ways of sharing, what are these ways of doing
things. Because I also feel that if I, as someone from another field, as
a cultural actor, take and appropriate these mechanisms and ways of
doing, I will be able to influence what they are. So there is the fan act, and
there’s also the act of trying to be aware of all the logic contained in these
actions.

And from another side, in the world of graphic design it is also a way to
affirm that we are different. And that we’re really engaged in doing this
and not only in designing nice pictures. That we really develop our own
tools.

It is a way to say: hey, we’re not the kind of politically engaged designers with
a different political goal every half month, and then we do a project
about it. It really impacts our ecosystem; we’re serious about it.

It’s true that, before we started to use Git, people asked: So you’re called
Open Source Publishing, but where are your sources? For some projects you
could download a .zip file but it was always a lot of trouble, because you needed
to do it afterwards, while you were already doing other projects.

Collaboration started to become a prominent part of the work; working
together on a project. Rather than, oh you do that and when you are finished
you send the file over and I will continue. It’s really about working together on
a project. Even if you work together in the same space, if you don’t have a
system to share files, it’s a pain in the ass.
After using it for a few years, would you say there are parts of Git
where you do not feel at home?

In Git, and in versioning systems in general, there is that feeling that the
latest version is the best. There is an idea of linearity, even though you can
have branches, you still have an idea of linearity in the process.

Yes, that’s true. We did this workshop Please computer let me design; the first
time was in a French school, in French, and the second time for a more European
audience, in English. We made a branch, but then you have the default branch (the English one) and you only see that one, while they are actually on the same level.

So the convention is to always show the main branch, the ‘master’?

In a way there is no real requirement in Git to have a branch called ‘master’.
You can have a branch called ‘English’ and a branch called ‘French’. But
it’s true that in all the visualization software we know (GitHub or Gitorious
are ways to visualize the content of a Git repository), you need to specify
which branch is shown by default. And by default, if you don’t
define it, it is ‘master’.
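
For instance, a bare repository on a server can be told which branch to present by default. The branch names below follow the workshop example, and the server path is a placeholder (a sketch, assuming a server-side bare repository):

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    # In the working copy, the two language versions live as branches.
    git("branch", "English")
    git("branch", "French")

    # On the server, the bare repository's HEAD decides which branch web
    # interfaces show by default; 'master' is only the default default.
    git("-C", "/srv/git/please-computer.git", "symbolic-ref", "HEAD",
        "refs/heads/French")
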
For certain types of things, such as code and text, it works really well; for
others, like when you’re making a visual design, it’s still very hard to compare
differences. If I make a poster for example I still make several files instead of
branches, so I can see them together at once, without having to check-out
another branch. Even in websites, if I want to make a layout, I’ll simply make
a copy of the HTML and CSS, because I want to be able to test out and
compare them. It might be possible with branches, it’s just too complicated.
Maybe the tools to visualize it are not there ... But it’s still easier to make
copies and pick the one you like.

It’s quite heavy to go back to another version. Also working collaboratively is
actually quite heavy. For example in workshops, or the ‘Balsamine’ project ... we
were working together on the same files at the same time, and if you want to share
your file with Git you’ll have to first add your file, then commit and pull and
push, which is four commands. And every time you commit you have to write
a message. So it is quite long. So while we were working on the .css for ‘Visual
Culture’, we tried it in Etherpad, and one of us was copying the whole text file
and committing.
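
The four-step share cycle mentioned here (add, commit, pull, push) could in principle be collapsed into a single helper. This is a hypothetical convenience script, not something OSP describes using; a rebase is used on pull to keep the history linear, and conflicts would still need manual care:

    import subprocess
    import sys

    def share(message):
        # Collapse the add / commit / pull / push cycle into one call.
        for step in (["add", "--all"],
                     ["commit", "-m", message],
                     ["pull", "--rebase"],
                     ["push"]):
            subprocess.run(["git", *step], check=True)

    if __name__ == "__main__":
        share(" ".join(sys.argv[1:]) or "work in progress")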

So you centralized in the end.

It’s more about third-party visual software. Let’s say Etherpad for example,
it’s a versioning system in itself. You could hook into Git through Etherpad
and each letter you type could be a commit. And it would make nonsense
messages but at the same time it would speed up the process to work together. We can imagine the same thing with Git (or any other collaborative
working system) integrated into Inkscape. You draw and every time you save
... At some point Subversion could also act as a WebDAV server, which means that for
any application it was possible to plug things together. Each time you would
save your file it would make a commit on the server. It worked pretty well
to bring new people into this system because for them it was just exactly the same:
OpenOffice was a WebDAV client, so it was possible to tell
OpenOffice that the place where you save is just a disk. It was just like saving, and it
was committing.
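
A rough sketch of the ‘every save becomes a commit’ idea as a simple polling watcher; this is hypothetical, not the Etherpad or WebDAV integration discussed above, and the watched file name is a placeholder:

    import subprocess
    import time
    from pathlib import Path

    WATCHED = Path("poster.svg")   # placeholder file to watch
    last_seen = None

    while True:
        stamp = WATCHED.stat().st_mtime
        if stamp != last_seen:     # the file was saved since we last looked
            last_seen = stamp
            subprocess.run(["git", "add", str(WATCHED)], check=True)
            # check=False tolerates saves where the content did not change
            subprocess.run(["git", "commit", "-m", f"autosave {time.ctime(stamp)}"],
                           check=False)
        time.sleep(2)              # poll every two seconds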

I really agree. From the experience of working on a typeface together in
Git with students, it was really painful. That’s because you are trying to
do something that generates source code, a type design program generates
source code. You’re not writing it by hand, and if you then have two versions
of the type design program, it already starts to create conflicts that are quite
hard. It’s interesting to bring two models together. Git is just an architecture
for how to store your versions, so things could hook into it.

For example with Etherpad, I looked into its API the other day, and
for working together with Git, I’m not sure if having every Etherpad revision
directly mapped to a Git revision would make sense if you work on a project
... but at the same time you could have every saved revision mapped to a
Git revision. It’s clear Git is made for an asynchronous collaboration process.
So there is Linus in his office, there are patches coming in from different
people. He has the time also to figure out which patch needs to go where.
This doesn’t really work for the Etherpad-style-direct-collaboration. For
me it’s cool to think about how you could make these things work together.
Now I’m working on this collaborative font editor which does that in some
sort of database. How would that work? It would not work if every revision
were in Git. I was thinking you could save, or sort of commit, and
that would put it in a Git repository, which you can pull and push. But if
you want to have four people working together and they start pulling, that
doesn’t work in Git.

I never really tried Sparkleshare; could that maybe work? Sparkleshare makes
a commit every time you save a document. In a way it works more like
Dropbox. Every time you save, it’s synchronized with the server directly.

So you need to find a balance between the very conscious commits you
make with Git and the fluidness of Etherpad, where the granularity is
much finer. Sparkleshare would be in between?
I think it would be interesting to have this kind of Sparkleshare behaviour, but
only when you want to work synchronously.

So you could switch in and out of different modes?

Usually Sparkleshare is used by people who don’t want to get too much involved
in Git and its commands. So it is really transparent: I send my files, it’s synchronized. I think it was really made for this kind of Dropbox behaviour. I think
it would make sense only when you want to have your hands on the process. To
have this available only when you decide, OK I go synchronous. Like you say,
if you have a commit for every letter it doesn’t make sense.
It makes sense. A lot of things related to versions in software development
are meant to track bugs, to track programming choices.

I don’t know about you ... but the way I interact with our Git repository since we
started to work with it ... I almost never went into the history of a project. It
just really never happened that I went back into this history, to check out an old
version.

I do!

A neat feature of Git is the bisect command. To find where it broke.

You can start from an old revision that you know works and then track
down, like a checkout, track down the bug.
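
For reference, a bare-bones bisect session boils down to a few calls. The revision identifier and the test script below are placeholders, and the sketch assumes a repository where the breakage can be reproduced:

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    git("bisect", "start")
    git("bisect", "bad")              # the current version is broken
    git("bisect", "good", "a1b2c3d")  # a hypothetical old revision known to work
    # Git now checks out revisions halfway in between; after testing each one
    # you answer with 'git bisect good' or 'git bisect bad' until the culprit
    # is found. With an automated check it can even run unattended:
    git("bisect", "run", "./check-website.sh")  # hypothetical test script
    git("bisect", "reset")            # return to where you started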

Can you give a concrete example, where that would be useful, I mean,
not in code.

Not code, okay. That I don’t know.

In a design, like visual design, I think it never happens. It happens on websites,
on tools. Because there is a bug, so you need to come back to see where it broke.
But for a visual design I’m not sure.

It’s true, also because as you said before, with .svg files or .sla files we often
have several duplicates. I sometimes checkout those. But it’s true it’s often
related to merge problems. Or something, you don’t know what to do, so
you’ll just check-out, to go back to an earlier version.

It would be interesting for me to really look at our use of Git and map some
kind of tool on top of a versioning system. Because it’s not even just versioning,
it is also a collaborative workflow, and to see what we mean by that. Maybe just to use
some feature of Git or whatever to provide the services we need and
really see what exactly we work with. And this kind of thing where we
want to see many versions at the same time, to compare, seems important.
Well, it’s the kind of thing that could be built to take advantage of a versioning system.
It is of course a bit strange that if you want to see different versions next
to each other you have to go back in time. It’s a kind of paradox, no?

But then you can’t see them at the same time.
Exactly, no.


Because there is no way to visualize your trip back in history.

Well I think something you could all have an interesting discussion
about is the question of exchange. Because now we are talking about the
individual. We’ve talked about how it’s easier to contribute to Git-based projects,
but to be accepted into an existing repository someone needs to say okay,
I want it, which is like SVN. What is easier is to publish your whole
Git repository online, with the only difference from the first version
being that you added your change; but it means that in proposing a change
you are already making a new cultural artifact. You’re already putting a new
something there. I find this to be a really fascinating phenomenon because
it has all kinds of interesting consequences. Of course we can look at it
the way of, it’s the cold and the liberal way of doing things. Because the
individual is at the center of this, because you are on your own. It’s your
thing in the first place, and then you can see if it maybe becomes someone
else’s thing too. So that has all kinds of coldness about it and it leads to
many abandoned projects and maybe it leads to a decrease of social activity
around specific projects. But there’s also an interesting part of it, where it
actually resembles quite well how culture works in the first place. Because
culture deals with a lot of redundancy, in the sense that we can deal with many
kinds of very similar things. We can have Akzidenz Grotesk, Helvetica and
the Akkurat all at the same time, and they have some kind of weird cultural
lineage thing going on in between them.

Are there any pull requests for OSP?
We did have one.

Eric is right to ask about collaboration with others, not only how to work
internally in a group.

That’s why GitHub is really useful. Because it has the architecture to exchange
changes. Because we have our own server it’s quite private, it’s really hard to
allow anyone to contribute to fonts for example. So we had e-mails: Hey here’s
a new version of the font, I did some glyphs, but also changed the shape of
the A. There we have two different things, new glyphs is one thing, we could say
we take any new glyph. But changing the A, how do you deal with this? There’s
a technical problem, well not technical ...

An architectural problem?

Yeah, we won’t add everyone’s SSH-key to the server because it will be endless
to maintain. But at the same time, how do you accept changes? And then, who
decides what changes will be accepted?

For the foundry we decided to have a maintainer for each font project.

It’s the kind of thing we didn’t do well. We have this kind of administrative
way of managing the server. Well it’s a lot of small elements that all together
make it difficult. Let’s say at some point we start to think maybe we need to
manage our repositories with something a bit more sophisticated than Gitolite. So we
could install something like Gitorious. We didn’t do it, but we could imagine
rebuilding a kind of ecosystem where people have their own repositories and
do anything we can imagine on this kind of hosting service. Gitorious is
Free Software, so you can deploy it on your own server. But it is not trivial
to do.
Can you explain the difference between Gitorious and GitHub?

Gitorious is, first of all, a free version; it’s not a free version of Git but of GitHub. One
is free and one is not.
Meaning you cannot install GitHub on your own server.

Git is a storage back-end, and Gitorious or GitHub are a kind of web application to interact with repositories and to manage them. And GitHub
is a program, and a company deploying that program to offer both a commercial service and a free-of-charge service. They have a lot of success with
the free service, in a sense. And they make a lot of money providing
the same service, exactly the same, except that you can have private
space on the server. It’s quite convenient, because the tools are really good
for managing repositories. As for Gitorious, I don’t exactly know what their
business model is; they made all the source code to run the platform Free
Software. It means they offer slightly less fancy features.

A bit less shiny?

Yeah, because they have less success and so less money to dedicate to development of the platform. But still, it’s an easy-to-grasp web interface,
a repositories manager. Which is quite cool. We could do that:
install this kind of interface to allow more people to have their repositories on the OSP server. But here comes the difficult thing: we would need
a bit more resources to run the server to host a lot of repositories. Even at this
moment we sometimes have problems with the server because it’s not
a large server. Nobody at OSP is really a sysadmin and has time to install
and set up everything nicely, etc. And we would also have to work on the
Gitorious web application to make it a bit more in line with our visual universe. Because now it’s really the kind of thing we cannot really associate
ourselves with.

Do you think ‘Visual Culture’ can leverage some of the success of GitHub?
People seem to understand and like working this way.

Well, it depends. We also meet a lot of people who come to GitHub and say,
I don’t understand, I don’t understand anything of this! Because of its huge
success, GitHub can put some extra effort into visualization, and they started
to run some small projects. So they can do more than ‘Visual Culture’ can
do.
And is this code available?

Some of their projects are Open Source.

Some of their projects are free. Even if we have some things going on in
‘Visual Culture’, we don’t have enough manpower to finalize this project.
The GitHub interface is really specific, really oriented; they manage to do
things like show fonts, show pictures, but I don’t think they can display
.pdf. ‘Visual Culture’ is really a good direction, but it can be made obsolete
by the fact that we don’t have enough resources to work on it. GitHub starts
to cover a lot of needs, but always in their way of doing things, so it’s a
problem.

I’m very surprised ... the quality of Git is that it isn’t centralized, and nowadays everything is becoming centralized in GitHub. I’m also wondering
whether ... I don’t think we should start to host other repositories, or maybe
we should, I don’t know.
Yeah, I think we should.

You do or you don’t want to become a hosting platform?

No. What I think is nice about GitHub is of course the social aspect around
sharing code. That they provide comments. Which is an extra layer on top
of Git. I’m having fantasies about another group like OSP who would use
Git and have their own server, instead of having this big centralized system.
But still have ways to interact with each other. But I don’t know how.
It would be interesting if it’s distributed without being disconnected.

If it were really easy to set up Git, or a versioning server, that would be
fantastic. But I can remember, as a software developer, when I started to
look for somewhere to host my code, setting up my own server was out of the
question. Because of not having time: no time to maintain, no time to deploy,
etc. At some point we need hosting platforms for ourselves. We have
almost enough to run our own platform. But think of all the people who
can’t afford it.
But in a way you are already hosting other people’s projects. Because
there are quite a few repositories for workshops that do not actually belong
to you.

Yeah, but we moved some of them to GitHub just to get rid of the pain of
maintaining these repositories.
We wanted the students to be independent. To really have them manage
their own projects.

GitHub is easier to manage than our own repository, which is still based on
a lot of files.

For me, if we ever make this hosting platform, it should be something other than
our own website. Because, like you say, it’s kind of centralized in the way we use
it now. It’s all on the Constant server.

Not anymore?

No, the Git repositories are still on the Constant server.

Ah, the Git is still. But they are synced with the OSP server. But still, I can
imagine it would be really nice to have many instances of ‘Visual Culture’
for groups of people running their own repositories.
It feels a bit like early days of blogging.

It would be really, really nice for us to allow other people to use our services.
I was also thinking of this, because of this branching stuff. For two reasons:
first, to make it easier for people to take advantage of our repository. Branching
our repository would be one click, just like in Gitorious or
GitHub. So I have an account and I like this project and I want to change
something, I just click on it. You’re branched into your own account and
you can start to work with it. That’s it, and it would be really convenient
for people who would like to work with our font files etc. And once we
have all these things running on our server we can think of a lot of ideas to
promote our own dynamic over versioning systems. But now we’re really a
bit stuck because we don’t have the tools we would like to have. With the
repositories, it’s something really rigid.
It is interesting to see the limits of what actually can happen. But it is
still better than the usual (In)design practices?

We would like to test GitMX. We don’t know much about it, but we would
like to use it for the pictures in high-resolution, .pdfs. We thought about it
when we were in Seoul, because we were putting pictures on a gallery, and
we were like ah, this gallery. We were wondering, perhaps if GitMX works
well, perhaps it can be separated into different types of content. And then
we can branch them into websites. And perhaps pictures of the finalized
work. In the end we have the ‘Iceberg’ with a lot of ‘in-progress’-pictures,
but we don’t have any portfolio or book. Again, because we don’t care much
about this, but in the end we feel we miss it a bit.

A narration ...

... to have something to present. Each time we prepare a presentation, we
need to start again to find back the tools and files, and to choose what we
want to send for the exhibition.

It’s really important because at some point, working with Git, I can remember telling people ...
Don’t push images!
I remember.

The repository is there to share the resources. And that’s really where it
shines. And don’t try to put all your active files in it. At some point we miss
this space to share those files.
But an image can be a recipe. And code can be an artifact. For me the
difference is not so obvious.

It is not always so clear. Sometimes the cut-off point is decided by the weight of
the file: if it is too heavy, we avoid Git. Another rule is: if it is easy to compile, leave
it out of Git. Sometimes the logic is reversed. If we need it to be online, even if it is
not a source but simply something we need to share, we put it on the Git. Some commits
are also errors. The distinction has been quite organic until now, in my experience. The
closer the practice gets to code, the cleaner the versioning process is.
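
The rules of thumb mentioned here (too heavy, easy to regenerate, needed online anyway) could be written down as a small check before adding a file. The thresholds, extensions and folder names below are invented for illustration:

    from pathlib import Path

    MAX_BYTES = 20 * 1024 * 1024           # hypothetical weight cut-off: 20 MB
    GENERATED = {".pdf", ".epub", ".zip"}   # easy to regenerate, so usually left out
    ALWAYS_SHARE = {"iceberg"}              # folders we want online even if not sources

    def belongs_in_git(path: Path) -> bool:
        if any(part in ALWAYS_SHARE for part in path.parts):
            return True                     # reversed logic: keep it because we need it online
        if path.stat().st_size > MAX_BYTES:
            return False                    # too heavy to clone comfortably
        if path.suffix.lower() in GENERATED:
            return False                    # can be rebuilt from the sources
        return True

    candidate = Path("fonts/alfphabet.sfd")  # hypothetical file
    if candidate.exists():
        print(belongs_in_git(candidate))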

There is also a kind of performative part of the repository. Where a
commit counts as a proof of something ...
When I presented OSP’s website, we had some remarks like: ah, it’s good we
can see what everybody has done, who has worked.

But strangely so far there were not many reactions from partners or clients
regarding the fact that all the projects could be followed at any stage. Even budget
wise ... Mostly, I think, because they do not really understand how it works.
And sometimes it’s true, it came to my mind, should we really show our website
to clients? Because they can check whether we are working hard, or this week
we didn’t do shit ... And I think it’s really based on trust and the type of
collaboration you want with your client. An actual collaboration and not a hierarchical relationship. So I think in the end it’s something that we have to work
on: on building a healthy relationship, where you show the process but it’s not
about control. The meritocracy of commits is well known, I think, on platforms
like GitHub. I don’t think this is really considered in OSP at all, actually.

It supports some self-time tracking that is nuanced and enriched by e-mail,
calendar events, writing in Etherpads. It gives a feeling of where the activity is
without following it too closely. A feeling rather than surveillance or meritocracy.

I know that Eric ... because he doesn’t really keep track of his working hours. He
made a script to look into his commit messages to know when he worked on a
project. Which is not always truthful. Because sometimes you make a commit on
some files that you made last week, but forgot to commit. And a commit is a
text message at a certain time. So it doesn’t tell you how much time you spent on
the file.
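
A reconstruction of what such a script might look like; this is not Eric’s actual script, the author name is a placeholder, and the one-hour session gap is an invented assumption. It reads the commit timestamps of one author and groups commits that are close together into working sessions:

    import subprocess
    from datetime import datetime, timedelta

    AUTHOR = "Eric"                    # placeholder author name
    SESSION_GAP = timedelta(hours=1)   # commits closer than this count as one session

    out = subprocess.run(
        ["git", "log", f"--author={AUTHOR}", "--pretty=format:%at", "--reverse"],
        check=True, capture_output=True, text=True).stdout

    stamps = [datetime.fromtimestamp(int(line)) for line in out.splitlines() if line]

    sessions, worked = 0, timedelta()
    for earlier, later in zip(stamps, stamps[1:]):
        if later - earlier <= SESSION_GAP:
            worked += later - earlier   # time between nearby commits
        else:
            sessions += 1               # a gap: the previous session ended here

    if stamps:
        print(f"roughly {worked} of activity spread over {sessions + 1} sessions")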

Although in the way you decided to visualize the commits, there is a sense
of duration between the last commit and the one before. So you have a sense
of how much time passed in between. Are there ways you sometimes
trick the system, to make things visible that might otherwise go missing?
In the messages sometimes, we talk about things we tried and didn’t work.
But it’s quite rare.

I kind of regret that I don’t write so much on the commits. At the beginning
when we decided to publish the messages on the homepage we talked about
this theater dialogue and I was really excited. But in the end I see that I
don’t write as much as I would like.
I think it’s really a question of the third-party programs we use. Our
messages are like a dialogue on the website. But when you write
a commit message you’re not at all in this interface. So you don’t answer
to something. If we would have the same kind of interface we have on the
website, you would realize you can answer to the previous commit message.
You have this sort of narrative thread and it would work. We are in the
middle: we have this feeling of a dialogue on one side, but when
you work, you’re not on the website to check the history. Basically, it
would be about making things really in line with what we want to achieve.
I commit just when I need to share the files with someone else. So I wait
until the last moment.

To push you mean?

No, to commit. And then I’ve lost track of what I’ve done and then I just
write ...

But it would be interesting to look at the different speeds of collaboration. They might each need a different type of commit message.

But it’s true, I must admit that when I start working on a project I don’t read the
last messages. And so, then you lose this dialogue as you said. Because sometimes
I say, Ludi is going to work on it. So I say, OK Ludi it’s your turn now,
but the thing is, if she says that to me I would not know because I don’t read the
commit messages.

I suppose that is something really missing from the Git client. When you pull,
you update your working copy to synchronize with the server; it just
says which files changed, how many changes there were. But it doesn’t give you the
story.

That’s what’s missing when you pull. Instead of just showing which files
have changed, it should show all the logs from the last time you pulled.
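
For what it’s worth, Git can already approximate this: right after a pull, the range ORIG_HEAD..HEAD covers the commits that just came in. A small sketch, assuming the pull actually brought in new commits:

    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], check=True,
                              capture_output=True, text=True).stdout

    git("pull")
    # ORIG_HEAD marks where the branch was before the pull moved it, so this
    # prints only the messages written since the last time we pulled.
    print(git("log", "ORIG_HEAD..HEAD", "--pretty=format:%an: %s"))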

Your earlier point, about recipes versus artifacts. I have something to add
that I forgot. I would reverse the question: what the versioning system
considers to be a recipe is a recipe. I mean, in this context ‘a
recipe’ is something that works well within the versioning system, such as
the description of your process to get somewhere. And I can imagine it’s
something, I would say, the Git community is trying to achieve:
make it something that you can share easily.

But we had a bit of this discussion with Alex for a reader we made. It is going to
be published, so we have the website with all the texts, and the texts are all under
a free license. But the publisher doesn’t want us to put the .pdfs online. I’m quite
okay with that, because for me it’s a condition that we put the sources online. But
if you really want the .pdf then you can clone the repository and make them
yourself in Scribus. It’s just an example of not putting the .pdf, but you have
everything you need to make the .pdf yourself. For me it’s quite interesting to say
our sources are there. You can buy the book but if you want the .pdf you have
to make a small effort to generate it, and then you can distribute it freely. But I
find it quite interesting; of course the easiest way would be the .pdf, but in this
case we can’t, because the publisher doesn’t want us to.

But that distinction somehow undervalues the fact that layout for example
is not just an executed recipe, no? I mean, so there is this kind of grey
area in design that is ... maybe not the final result, but also not a sort of
executable code.
We see it with ‘Visual Culture’, for instance, because Git doesn’t make it easy
to work with binaries. And the point of ‘Visual Culture’ is to make .jpegs
visible, and all the kinds of graphical files we work with. So it’s like we don’t
know how to decide whether we should put, for instance, .pdfs in the Git
repository online. Because on the one hand it makes the repository less manageable
to work with in Git. But on the other hand we want to make things visible on
the website.
But it’s also storage space. If you want to clone it, if you want people to clone
it too, you don’t want an 8-gigabyte repository.

I don’t know because it’s not really what OSP is for, but you can imagine, like
Dropbox has been made to easily share large files, or even files in general.
We can imagine that another company will set up something, especially for
graphic designers or the graphic industry. The way GitHub did something
for the development industry. They will come up with solutions for this
very problem.
I just want to say that I think because we’re not a developer group, at the start the
commit messages were a space where you would throw all your anger, frustration.
And we first published a Git log in the Balsamine program, because we saw that.
This was the first program we designed with ConTeXt. So we were manipulating
code for layout. The commit messages were all really funny, because Pierre and
Ludi come from a non-coding world and it was really inspiring and we decided
to put it in the publication. Then we kind of looked; Ludi said a couple of slightly bad
things about the client, but it was okay. Now I think we are more aware that it’s
public, we kind of pay attention not to say stuff we don’t mean to ...

It’s not such an exciting space anymore as in the first half year?

It is often very formal and not very exciting, I think. But sometimes I put
in quite some effort just to make clear what I’m trying to share.

And there are also commits that you make for yourself. Because sometimes, even
if you work on a project alone, you still do a Git project to keep track, to have a
history to come back to. Then you write to yourself. I think it’s also something
else. I’ve never tried it.

It’s a lot to ask in a way, to write about what you are doing while you are
doing it.

I think we should pay more attention to the first commit of a project, and
the last. Because it’s really important to start the story and to end it. I speak
about this ‘end’ because I feel overwhelmed by all these unfinished projects; I’m
quite tired of it. I would like us to find a way to archive projects which are
not alive any more. To find a good way to do it. Because the list of folders
is still growing, and in a way it is okay but a lot of projects are not active.

But it’s hard to know when the last commit is. With the Balsamine project it’s
quite clear, because it’s season per season. But still, we never know when it is the
last one. The last one could be solved by the ‘Iceberg’, to make the last snapshots
and say okay now we make the screenshots of the latest version. And then you close
it ... We wanted the last one to be: Hey, we sent the .pdfs to the printer.
But actually we had to send it back another time because there was a mistake.
And then the log didn’t fit on the page anymore.


At the Libre Graphics Meeting 2008, OSP sat down with
Chris Lilley on a small patch of grass in front of the
Technical University in Wroclaw, Poland. Warmed up by
the early May sun, we talked about the way standards are
made, how ‘specs’ influence the work of designers, programmers and managers and how this process is opening up to voices from outside the W3C. Chris Lilley is
trained as a biochemist, and specialised in the application
of biological computing. He has been involved with the
World Wide Web Consortium since the 1990s, headed the
Scalable Vector Graphics (SVG) working group and currently looks after two W3C activity areas: graphics, including PNG, CGM, graphical quality, and fonts, including font formats, delivery, and availability of font software.
I would like to ask you about the way standards are made ... I think there’s a
relation between the way Free, Libre and Open Source software works, and
how standards work. But I am particularly interested in your announcement
in your talk today that you want to make the process of defining the SVG
standard a public process?
Right. So, there’s a famous quote that says that standards are like sausages.
Your enjoyment of them is improved by not knowing how they’re made. 1
And to some extent, depending on the standards body and depending on
what you’re trying to standardize, the process can be very messy. If you
were to describe W3C as a business proposition, it has got to fail. You’re
taking companies who all have commercial interests, who are competing and
you’re putting them in the same room and getting them to talk together and
agree on something. Oddly, sometimes that works! You can sell them the
idea that growing the market is more important and is going to get them
more money. The other way ... is that you just make sure that you get the
managers to sign, so that their engineers can come and discuss standards,

1 Laws are like sausages. It’s better not to see them being made. Otto von Bismarck, 1815–1898

and then you get the engineers to talk and the managers are out of the way.
Engineers are much more forthcoming, because they are more interested in
sharing stuff because engineers like to share what they’re doing, and talk
on a technical level. The worst thing is to get the managers involved, and
even worse is to get lawyers involved. W3C does actually have all those
three in the process. Shall we do this work or not is a managerial level that’s
handled by the W3C advisory committee, and that’s where some people
say No, don’t work on that area or We have patents or This is a bad idea or
whatever. But often it goes through and then the engineers basically talk
about it. Occasionally there will be patents disclosed, so the W3C also has
a process for that. The first things to be done are the ‘charters’. The charter
says, in broad scope, what the group is going to work on. As soon as you’ve got
your first draft, that further defines the scope, but it also triggers what is
called an exclusion opportunity, which basically gives the companies, I think,
ninety days to either declare that they have a specific patent, say what its
number is and say that they exclude it, or not. And if they don’t, they’ve just
given a royalty-free licence to whatever is needed to implement that spec.
The interesting thing is that if they give the royalty-free licence they don’t
have to say which patents they’re licencing. Other standards organizations
build up a patent portfolio, and they list all these patents and they say what
you have to licence. W3C doesn’t do that, unless they’ve excluded it which
means you have to work around it or something like that. Based on what
the spec says, all the patents that have been given, are given. The engineers
don’t have to care. That’s the nice thing. The engineers can just work away,
and unless someone waves a red flag, you just get on with it, and at the end
of the day, it’s a royalty-free specification.
But if you look at the SVG standard, you could say that it’s been quite a
bumpy road 2 ... What kind of work do you need to do to make a successful
standard?

Firstly, you need to agree on what you’re building, which isn’t always firm
and sometimes it can change. For example, when SVG was started the idea
was that it would be just static graphics. And also that it would be animated
2 http://ospublish.constantvzw.org/news/whos-afraid-of-adobe-not-me-says-the-mozillafoundation

using scripts, because with dynamic HTML and whatever, this was ’98, we
were like: OK, we’re going to use scripting to do this. But when we put it
out for a first round of feedback, people were like No! No, this is not good
enough. We want to have something declarative. We don’t want to have to write
a script every time we want something to move or change color. Some of the
feedback, from Macromedia for example was like No, we don’t think it should
have this facility, but it quickly became clear why they were saying that and
what technology they would rather use instead for anything that moved or
did anything useful ... We basically said That’s not a technical comment, that’s
a marketing comment, and thank you very much.

Wait a second. How do you make a clear distinction between marketing and
technical comments?

People can make proposals that say We shouldn’t work on this, we shouldn’t
work on that, but they’re evaluated at a technical level. If it’s Don’t do it
like that because it’s going to break as follows, here I demonstrate it then that’s
fine. If they’re like Don’t do it because that competes with my proprietary
product then it’s like Thanks for the information, but we don’t actually care.
It’s not our problem to care about that. It’s your problem to care about
that. Part of it is sharing with the working group and getting the group
to work together, which requires constant effort, but it’s no different from
any sort of managerial or trust company type thing. There’s this sort of
encouragement in it that at the end of the day you’re making the world a
better place. You’re building a new thing and people will use it and whatever.
And that is quite motivating. You need the motivation because it takes a lot
longer than you think. You build the first spec and it looks pretty good and
you publish it and you smooth it out a bit, put it out for comments and you
get a ton of comments back. People say If you combine this with this with this
then that’s not going to work. And you go Is anyone really going to do that? But
you still have to say what happens. The computer still has to know what
happens even if they do that. Ninety percent of the work is after the first
draft, and it’s really polishing it down. In the W3C process, once you get
to a certain level, you take it to what is euphemistically called the ‘last call’.
This is a term we got from the IETF. 3 It actually means ‘first call’ because

3 The Internet Engineering Task Force, http://www.ietf.org/

you never have just one. It’s basically a formal round of comments. You log
every single comment that’s been made, you respond to them all, people can
make an official objection if you haven’t responded to the comment correctly
etcetera. Then you publish a list of what changes you’ve made as a basis of
that.

What part of the SVG standardization process would you like to make public?

The part that I just said has always been public. W3C publishes specifications on a regular basis, and these are always public and freely available.
The comments are made in public and responded to in public. What hasn’t
been public has been the internal discussions of the group. Sometimes it
can take a long time if you’ve got a lot of comments to process or if there’s a
lot of argumentation in the group: people not agreeing on the direction to
go, it can take a while. From the outside it looks like nothing is happening.
Some people like to follow this at a very detailed level, and blog about it,
and blablabla. Over time, more and more working groups have become public. The SVG group just recently got rechartered and it’s now a public group.
All of its minutes are public. We meet for ninety minutes twice a week on
a telephone call. There’s an IRC log of that and the minutes are published
from that, and that’s all public now. 4

Could you describe such a ninety minute meeting for us?

There are two chairs. I used to be the chair for eight years or so, and then
I stepped down. We’ve got two new chairs. One of them is Erik Dahlström
from Opera, and one of them is Andrew Emmons from Bitflash. Both
are SVG implementing companies. Opera on the desktop and mobile, and
Bitflash is just on mobile. They will set out an agenda ahead of time and
say We will talk about the following issues. We have an issue tracker, we have
an action tracker which is also now public. They will be going through the
actions of people saying I’m done and discussing whether they’re actually
done or not. Particular issues will be listed on the agenda to talk about
and agree on, and then if we agree on something and the spec has to change
as a result, someone will get an action to fold that change back into the
4 Scalable Vector Graphics (SVG) Feedback Page: http://www.w3.org/Graphics/SVG/feedback.html

spec. The spec is held in CVS so anyone in the working group can edit
it and there is a commit log of changes. When anyone accidentally breaks
something or tramples on someone else’s edit, or whatever (which does
happen), or if a change came as the result of a public comment, then there will be
a response back saying we have changed the spec in the following way ... Is
this acceptable? Does this answer your comment?
How many people take part in such a meeting?

In the working group itself there are about 20 members and about 8 or
so who regularly turn up, every week for years. You know, you lose some
people over time. They get all enthusiastic and after two years, when you
are not done, they go off and do something else, which is human nature.
But there have been people who have been going forever. That’s what you
need actually in a spec, you need a lot of stamina to see it through. It is a
long term process. Even when you are done, you are not done because you’ve
got errata, you’ve got revisions, you’ve got requests for new functionalities
to make it into the next version and so on.

On the one hand you could say every setting of a standard is a violent process,
some organisation forcing a standard upon others, but the process you describe
is entirely based on consensus.

There’s another good quote. Tim Berners-Lee was asked why W3C works
by consensus, rather than by voting, and he said: W3C is a consensus-based
organisation because I say so, damn it. 5 That’s the inventor of the Web,
you know ... (laughs) If you have something in a spec because 51% of the
people thought it was a good idea, you don’t end up with a design, you end
up with a bureaucratic type decision thing. So yes, the idea is to work by
consensus. But consensus is defined as: ‘no articulated dissent’ so someone
can say ‘abstain’ or whatever and that’s fine. But we don’t really do it on
a voting basis, because if you do it like that, then you get people trying to
5 Consensus is a core value of W3C. To promote consensus, the W3C process requires Chairs to ensure that groups consider all legitimate views and objections, and endeavor to resolve them, whether these views and objections are expressed by the active participants of the group or by others (e.g., another W3C group, a group in another organization, or the general public). World Wide Web Consortium, General Policies for W3C Groups, 2005. [Online; accessed 30.12.2014]

make voting blocs and convince other people to vote their way ... it is much
better when it is done on the basis of a technical discussion, I mean ... you
either convince people or you don’t.
If you read about why this kind of work is done ... you find different arguments. From enhancing global markets to: ‘in this way, we will create a
better world for everyone’. In Tim Berners-Lee’s statements, these two are
often mixed. If you for example look at the DIN standards, they are unambiguously put into the world to help and support business. With Web
Standards and SVG, what is your position?

Yes. So, basically ... the story we tell depends on who we are telling it to and
who is listening and why we want to convince them. Which I hope is not as
duplicitous as it may sound. Basically, if you try to convince a manager that
you want 20% of an engineer’s time for the coming two years, you are telling
them things to convince them. Which is not necessarily untrue, but that is
the focus they want. If you are talking to designers, you are telling them how
that is going to help them when this thing becomes a spec, and the fact that
they can use this on multiple platforms, and whatever. Remember: when
the web came out, to exchange any document other than plain text was extremely difficult. It meant exchanging word processor formats, and you had
to know on what platform you were on and in what version. The idea that
you might get interoperability, and that the Mac and the PC could exchange
characters that were outside ASCII was just pie in the sky stuff. When we
started, the whole interoperability and cross-platform thing was pretty novel
and an untested idea essentially. Now it has become pretty much solid. We
have got a lot of focus on accessibility for disabled people, and also internationalization,
which is, if you like, another type of accessibility. It would be very easy for
an organisation like W3C, which is essentially funded by companies joining it, and therefore they come from technological countries ... it would be
very easy to focus on only those countries and then produce specifications
that are completely unusable in other areas of the world. Which still does
sometimes happen. This is one of the useful things of the W3C. There is
the internationalization review, and an accessibility review, and nowadays also
a mobile accessibility review to make sure it does not just work on desktops.
Some organisations make standards basically so they can make money. Some
of the ISO 6 standards, in particular those of the MPEG group: their business model
is that you contribute an engineer for a couple of years, you make a patent
portfolio and you make a killing off licencing it. That is pretty much to keep
out the people who were not involved in the standards process. Now, W3C
takes quite an opposite view. The Royalty-Free License 7 for example, explicitly says: royalty-free to all. Not just the companies who were involved
in making it, not just companies, but anyone. Individuals. Open Source
projects. So, the funding model of the W3C is that members pay money,
and that pays our salaries, basically. We have a staff of 60 or so, and
that’s where our salaries come from, which actually makes us quite different
from a lot of other organisations. IETF is completely volunteer based so
you don’t know how long something is going to take. It might be quick, it
might be 20 years, you don’t know. ISO is a national body largely, but the
national bodies are in practice companies who represent that nation. But in
W3C, it’s companies who are paying to be members. And therefore, when
it started there was this idea of secrecy. Basically, giving them something
for their money. That’s the trick, to make them believe they are getting
something for their money. A lot of the ideas for W3C came from the
X Consortium 8 actually, it is the same people who did it originally. And
there, what the meat was ... was the code. They would develop the code and
give it to the members of the X Consortium three months before the public
got it and that was their business benefit. So that is actually where our ‘three
month rule’ comes from. Each working group can work for three months
but then they have to go public, have to publish. ‘The heartbeat rule’, we
call it now. If you miss several heartbeats then you’re dead. But at the same
time if you’re making a spec and you’re growing the market then there’s a
need for it to be implemented. There’s an implementation page where you
encourage people to implement, you report back on the implementations,

6 International Standards for Business, Government and Society. International Organization for Standardization (ISO), http://www.iso.org
7 Overview and Summary of W3C Patent Policy, http://www.w3.org/2004/02/05-patentsummary.html
8 The purpose of the X Consortium was to foster the development, evolution, and maintenance of the X Window System, a comprehensive set of vendor-neutral, system-architecture neutral, network-transparent windowing and user interface standards. http://www.x.org/wiki/XConsortium


you make a test suite, you show that every feature in the spec that there’s
a test for ... at least two implementations pass it. You’re not showing that
everyone can use it at that stage. You’re showing that someone can read the
spec and implement it. If you’ve been talking to a group of people for four
years, you have a shared understanding with them and it could be that the
spec isn’t understandable without that. The implementation phase lets you
find out that people can actually implement it just by reading the spec. And
often there are changes and clarifications made at that point. Obviously one
of the good ways to get something implemented is to have Open Source
people do it and often they’re much more motivated to do it. For them it’s cool when it is new: If you give me this new feature, great, we’ll do it, rather than: Well, that doesn’t quite fit into our product plans until the next quarter, and all that sort of stuff. Up until now, there hasn’t really been a good way
for the Open Source people to get involved. They can comment on specs
but they’re not involved in the discussions. That’s something we’re trying
to change by opening up the groups, to make it easier for an Open Source
group to contribute on an ongoing basis if they want to. Right from the
beginning part, to the end where you’re polishing the tiny details in the
corner.
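To make the exit criteria described above concrete, here is a minimal sketch (not from the interview; the feature and browser names are invented) of how such an implementation report could be checked, with the rule that every feature that has a test needs at least two passing implementations:

# Hypothetical sketch: every tested spec feature needs at least two
# passing implementations before the spec can move on, in the spirit of
# the W3C exit criteria described above. All names below are made up.

test_results = {
    "gradients":    {"BrowserA": True,  "BrowserB": True,  "BrowserC": False},
    "text-on-path": {"BrowserA": True,  "BrowserB": False, "BrowserC": False},
    "filters":      {"BrowserA": True,  "BrowserB": True,  "BrowserC": True},
}

def features_missing_support(results, minimum=2):
    """Return the features that do not yet have `minimum` passing implementations."""
    failing = []
    for feature, passes in results.items():
        if sum(1 for ok in passes.values() if ok) < minimum:
            failing.append(feature)
    return failing

missing = features_missing_support(test_results)
if missing:
    print("Not ready to exit Candidate Recommendation:", ", ".join(missing))
else:
    print("Every tested feature has at least two independent implementations.")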
I think the story of web fonts shows how an involvement of the Open Source
people could have made a difference.

When web fonts were first designed, essentially you had Adobe and Apple
pushing one way, Bitstream pushing the other way, both wanting W3C to
make their format the one and only official web format, which is why you
ended up with a mechanism to point to fonts without saying what format
was required. And then you had Netscape 4, which pointed off to a
Bitstream format, and you had IE4 which pointed off to this Embedded
Open Type (EOT) format. If you were a web designer, you had to have two
different tools, one of which only worked on a Mac, and one of which only
worked on PC, and make two different fonts for the same thing. Basically
people wouldn’t bother. As Håkon 9 mentioned, the only people who do
actually use that right now really, are countries where the local language

9 Håkon Wium Lie proposed Cascading Style Sheets (CSS) in 1994. http://www.w3.org/People/howcome/

is not well provided for by the operating systems. Even now, things like Windows XP and Mac OS X don’t fully support some of the Indian languages.
But they can get it into web pages by using these embedded fonts. Actually
the other case where it has been used a lot, is SVG, not so much on the
desktop though it does get used there but on mobiles. On the desktop
you’ve typically got 10 or 20 fonts and you got a reasonable coverage. On a
mobile phone, depending on how high or low ended it is, you might have
a single font, and no bold, and it might even be a pixel-based font. And
if you want to start doing text that skews and swirls, you just can’t do that
with a pixel-based font. So you need to download the font with the content,
or even put the font right there in the content just so that they can see
something.
I don’t know how to talk about this, but ... envisioning a standard before
having any concrete sense of how it could be used and how it could change the
way people work ... means you also need to imagine how a standard might
change, once people start implementing it?
I wouldn’t say that we have no idea of how it’s going to work. It’s more a
case that there are obvious choices you can make, and then not so obvious
choices. When work is started, there’s always an idea of how it would fit in
with a lot of things and what it could be used for. It’s more the case that
you later find that there are other things that you didn’t think of that you
can also use it for. Usually it is defined for a particular purpose and then you find that it can also do these other things.

Isn’t it so that sometimes, in that way, something that is completely marginal,
becomes the most important?

It can happen, yes.

For me, SVG is a good example of that. As I understood it, it was planned
to be a format for the web. And as I see it today, it’s more used on the
desktop. I see that on the Linux desktop, for theming, most internals are
using SVG. We are using Inkscape for SVG to make prints. On the other
hand, browsers are really behind.

Browsers are getting there. Safari has got reasonably good support. Opera
has got very good support. It really has increased a lot in the last couple
of years. Mozilla Firefox less so. It’s getting there. They’ve been at it
for longer, but it also seems to be going slower. The browsers are getting
there. The implementations which I showed a couple of days ago, those
were mobile implementations. I was showing them on a PC, but they were
specially built demos. Because they’re mobile, it tends to move faster.

But you still have this problem that Internet Explorer is a slow adopter.

Yes, Internet Explorer has not adopted a lot of things. It’s been very slow
to do CSS. It hasn’t yet done XHTML, although it has shipped with an
XML parser since IE4. It hasn’t done SVG. Now they’ve got their own
thing ... Silverlight. It has been very hard to get Microsoft on board and
getting them doing things. Microsoft were involved in the early part of
SVG but getting things into IE has always been difficult. What amazes me
to some extent, is the fact that it’s still used by about 60-70% of people.
You look at what IE can do, and you look at what all the other browsers
can do, and you wonder why. The thing is ... it is still a brake and some technologies don’t get used because people want to make sure that everyone can see them. So they go down to the lowest common denominator. Or they double-implement. Implement something for all the other browsers, and implement something separate for IE, and then have to maintain two
different things in parallel, and tracking revisions and whatever. It’s a nightmare. It’s a huge economic cost because one browser doesn’t implement the
right web stuff. (laughing, sighing)

My question would be: what could you give us as a kind of advice? How could we push this adoption where we are working? Even if it is only getting the people of Firefox to adopt SVG?

Bear in mind that Firefox has this thing of Trunk builds and Branch builds
and so on. For example when Firefox 3 came out, well the Beta is there.
Suddenly there’s a big jump in the SVG stuff because Firefox 2 was on the same branch as 1.5, and the SVG was basically frozen at that point.
The development was ongoing but you only saw it when 3 came out. There
were a bunch of improvements there. The main missing features are the
animation and the web fonts and both of those are being worked on. It’s
interesting because both of those were on Acid 3. Often I see an acceleration
of interest in getting something done because there’s a good test. The Acid
Test 10 is interesting because it’s a single test for a huge slew of things all at
once. One person can look at it, and it’s either right or it’s wrong, whereas
the tests that W3C normally produces are very much like unit tests. You
test one thing and there’s like five hundred of them. And you have to go
through, one after another. There’s a certain type of person who can sit
through five hundred tests on four browsers without getting bored but most
people don’t. There’s a need for this sort of aggregative test. The whole
thing is all one. If anything is wrong, it breaks. That’s what Acid is designed
to do. If you get one thing wrong, everything is all over the place. Acid 3
was a submission-based process and like a competition, the SVG working
group was there, and put in several proposals for what should be in Acid 3,
many of which were actually adopted. So there’s SVG stuff in Acid 3.

So ... who started the Acid Test?

Todd Fahrner designed the original Acid 1 test, which was meant to exercise
the tricky bits of the box-model in CSS. It ended up like a sort of Mondrian diagram, 11 red squares, and blue lines and stuff. But there was a big scope
for the whole thing to fall apart into a train wreck if you got anything
wrong. The thing is, a lot of web documents are pretty simple. They got
paragraphs, and headings and stuff. They weren’t exercising very much the
model. Once you got tables in there, they were doing it a little bit more. But
it was really when you had stuff floated to one side, and things going around
or whatever, and that had something floated as well. It was in that sort of
case where it was all breaking, where people wouldn’t get interoperability.
It was ... the Web Standards Project 12 who proposed this?
Yes, that’s right.

10 The Acid 3 test: http://acid3.acidtests.org is comprehensive in comparison to more detailed, but fragmented SVG tests: http://www.w3.org/Graphics/SVG/WG/wiki/Test_Suite_Overview#W3C_Scalable_Vector_Graphics_.28SVG.29_Test
11 Acid Test Gallery http://moonbase.rydia.net/mental/writings/box-acid-test/
12 The Web Standards Project is a grassroots coalition fighting for standards which ensure simple, affordable access to web technologies for all http://www.webstandards.org/


It didn’t come from a standards body.

No, it didn’t come from W3C. The same for Acid 2, Håkon Wium Lie was
involved in that one. He didn’t blow his own trumpet this morning, but
he was very much involved there. Acid 3 was Ian Hickson, who put that
together. It’s a bit different because a lot of it is DOM scripting stuff. It
does something, and then it inquires in the DOM to see if it has been done
correctly, and it puts that value back as a visual representation so you can
see. It’s all very good because apparently it motivates the implementors to
do something. It’s also marketable. You can have a blog posting saying we
do 80% of Acid Test. The public can understand that. The people who are
interested can go Oh, that’s good.
It becomes a mark of quality.

Yes, it’s marketing. It’s like processor speed in PCs and things. There is so much technology in computers, so then what do you market it on? Well it’s got that clock speed and it’s got this much memory. OK, great, cool.
This one is better than that one because this one’s got 4 gigs and that one’s
got 2 gigs. It’s a lot of other things as well, but that’s something that the
public can in general look at and say That one is better. When I mentioned
the W3C process, I was talking about the engineers, managers. I didn’t talk
about the lawyers, but we do have a process for that as well. We can have a patent advisory group formed: if someone has made a claim and it’s disputed, then we can have lawyers talking among themselves. What we really don’t
have in that is designers, end-users, artists. The trick is to find out how to
represent them. The CSS working group tried to do that. They brought in
a number of designers, Jeff Veen 13 and these sort of people were involved
early on. The trouble is that you’re speaking a different language, you’re
not speaking their language. When you’re having weekly calls ... Reading a
spec is not bedtime reading, and if you’re arguing over the fine details of a
sentence ... (laughing) well, it will put you to sleep straight away. Some of
the designers are like: I don’t care about this. I only want to use it. Here’s what
I want to be able to do. Make it that I can do that, but get back to me when it’s
done.

13 Jeff Veen was a designer at Wired magazine, in those days. http://adaptivepath.com/aboutus/veen.php


That’s why the idea of the Acid Test is a nice breed between the spec and
the designer. When I was seeing the test this morning, I was thinking
that it could be a really interesting work to do, not to really implement it
but to think about with the students. How would you conceive a visual
test? I think that this could be a really nice workshop to do in a university
or in a design academy ...
It’s the kind of reverse-reverse engineering of a standard which could help
you understand it on different levels. You have to imagine how wild you
can go with something. I talk about standards, and read them - not before
going to bed - because I think that it’s interesting to see that while they’re quite pragmatic in how they’re put together, they have an effect on the practice of, for example, designers. Something that I have been following with interest is how the concept of separating form and content has become extremely
influential in design, especially in web design. Trained as a pre-web designer,
I’m sometimes a bit shocked by the ease with which this separation is made.

That’s interesting. Usually people say that it’s hard or impossible, that you
can’t ever do it. The fact that you’re saying that it’s easy or that it comes
naturally is interesting to me.

It has been appropriated by designers as something they want. That’s why it’s
interesting to look at the Web Standards Project where designers really fight
for a separation of content and form. I think that this is somehow making
the work of designers quite ... boring. Could you talk a bit about how this is
done?
It’s a continuum. You can’t say that something is exactly form or exactly
presentation because there are gradations. If you take a table, you’ve already
decided that you want to display the material in a tabular way. If it’s a real
table, you should be able to transpose it. If you take the rows and columns,
and the numbers in the middle then it should still work. If you’ve got
‘sales’ here and if you’ve got ‘regions’ there, then you should still be able to
transpose that table. If you’re just flipping it 90 degrees then you are using
it as a layout grid, and not as a table. That’s one obvious thing. Even then,
deciding to display it as a tabular thing means that it probably came from a
much bigger dataset, and you’ve just chosen to sum all of the sales data over
one year. Another one: you have again the sales data, you could have it as a pie chart, but you could also have it as a bar chart, you could have it in various
other ways. You can imagine that what you would do is ship some XML
that has that data, and then you would have a script or something which
would turn it into an SVG pie chart. And you could have a bar chart, or you
could also say show me only February. That interaction is one of the things
that one can do, and arguably you’re giving it a different presentational form.
It’s still very much a gradation. It’s how much re-styleability remains. You
can’t ever have complete separation. If I’m describing a company, and [1] I want to do a marketing brochure, and [2] I want to do an annual report for the shareholders, and [3] I want to do an internal document for the engineering team, I can’t have the same content all over those three and just
put styling on it. The type of thing I’m doing is going to vary for those
audiences, as will the presentation. There’s a limit. You can’t say: here’s the
überdocument, and it can be styled to be anything. It can’t be. The trick is
to not mingle the style of the presentation when you don’t need to. When
you do need to, you’re already halfway down the gradient. Keep them as far
apart as you can, delay it as late as possible. At some point they have to be
combined. A design will have to go into the crafting of the wording, how
much wording, what voice is used, how it’s going to fit with the graphics
and so on. You can’t just slap random things together and call it design,
it looks like a train wreck. It’s a case of deferment. It’s not ever a case of
complete separation. It’s a case of deferring it and not tripping yourself up.
Just simple things like bolds and italics and whatever. Putting those in as
emphasis and whatever because you might choose to have your emphasized
words done differently. You might have a different font, you might have a
different way of doing it, you might use letter-spacing, etc. Whereas if you
tag that in as italics then you’ve only got italics, right? It’s a simple example
but at the end of the day you’re going to have to decide how that is displayed.
You mentioned print. In print no one sees the intermediate result. You see
ink on paper. If I have some Greek in there and if I’ve done that by actually
typing in Latin letters on the keyboard and putting a Greek font on it and
out comes Greek, nobody knows. If it’s a book that’s being translated, there
might be some problems. The more you’re shipping the electronic version
around, the more it actually matters that you put in the Greek letters as
Greek because you will want to revise it. It matters that you have flowing
text rather than text that has been hand-ragged because when you put in
the revisions you’re going to have to re-rag the entire thing or you can just
say re-flow and fix it up later. Things like that.
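As an aside to the pie chart example mentioned above, here is a minimal sketch (in Python, with invented sales figures and file name; nothing in it comes from the interview) of the idea of shipping the data and deferring the presentation to a script that renders an SVG chart:

# Hypothetical sketch: keep the data separate and generate the SVG pie
# chart from it with a script, so the same data could as easily become a
# bar chart or a filtered view. Figures and file name are invented.
import math

sales = {"North": 120, "South": 80, "East": 150, "West": 50}

def pie_chart(data, radius=100, cx=110, cy=110):
    total = sum(data.values())
    angle = 0.0
    colours = ["#c33", "#3c3", "#33c", "#cc3"]
    slices = []
    for i, (label, value) in enumerate(data.items()):
        start, angle = angle, angle + 2 * math.pi * value / total
        x1, y1 = cx + radius * math.cos(start), cy + radius * math.sin(start)
        x2, y2 = cx + radius * math.cos(angle), cy + radius * math.sin(angle)
        large = 1 if (angle - start) > math.pi else 0
        slices.append(
            f'<path d="M{cx},{cy} L{x1:.1f},{y1:.1f} '
            f'A{radius},{radius} 0 {large} 1 {x2:.1f},{y2:.1f} Z" '
            f'fill="{colours[i % len(colours)]}"><title>{label}</title></path>'
        )
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="220" height="220">'
            + "".join(slices) + "</svg>")

with open("sales.svg", "w") as out:
    out.write(pie_chart(sales))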

The idea of time, and the question of delay is interesting. Not how, but when you
enter to fine-tune things manually. As a designer of books, you’re always facing
the question: when to edit, what, and on what level. For example, we saw this
morning 14 that the idea of having multiple skins is really entering the publishing
business, as an idea of creativity. But that’s not the point, or not the complete
point. When is it possible to enter the process? That’s something that I think we
have to develop, to think about.

The other day there was a presentation by Michael Dominic Kostrzewa 15
that shocked me. He is now working for Nokia, after working for Novell
and he was explaining how designers and programmers were fighting each
other instead of fighting the ‘real villain’, as he said, who were the managers. What was really interesting was how this division between content and style was also mirroring a kind of political or socio-organizational divide within companies, where you need to assign roles, borders, responsibilities to different people. What was really frightening from the talk was that you understood that this division was encouraging people not to try
and learn from each other’s practice. At some point, the designer would
come to the programmer and say: In the spec, this is supposed to be like this
and I don’t want to hear anything about what kind of technical problems you
face.
Designers as lawyers!

Yes ... and the programmer would say: OK, we respect the spec, but then don’t expect anything else from us. This kind of behaviour in the end,
blocks a lot of exchange, instead of making a more creative approach
possible.

14 Andy Fitzsimon: Publican, the new Open Source publishing tool-chain (LGM 2008) http://media.river-valley.tv/conferences/lgm2008/quicktime/0201-Andy_Fitzsimon.html
15 Michael Dominic Kostrzewa: Programmers hell: working with the UI designer (LGM 2008)


I read about (and this is before skinning became more common) designers
doing some multimedia things at Microsoft. You had designers and then
there were coders. Each of them hated the other ones. The coders thought
the designers were idiots who lived in lofts and had found objects in their
ears. The designers thought that the programmers were a bunch of socially
inept nerds who had no clue and never got out in sunlight and slept in their
offices. And since they had that dynamic, they would never explain to each
other ( ... )
(policeman arrives)

POLICEMAN:
Do you speak English?

Yes.

POLICEMAN:
You must go from this place because there’s a conference.

Yes, we know. We are part of this conference (shows LGM badge).

POLICEMAN:
We had a phone call that here’s a picnic. I don’t really see a picnic ...

We’re doing an interview.

POLICEMAN:
It looks like a picnic, and professors are getting nervous. You must go sit
somewhere else. Sorry, it is the rules. Have a nice day!


At the Libre Graphics Meeting 2008, OSP picks up a conversation that Harrison allegedly started in a taxi in Montreal, a year
earlier. We meet font designer and developer Dave Crossland
in a noisy food court to speak about his understanding of the
intertwined histories of typography and software, and the master in type design at the Department of Typography at the
University of Reading. Since the interview, a lot has happened.
Dave finished his typeface Cantarell and moved on to consult
the Google Web Fonts project, commissioning new typefaces
designed for the web. He is also currently offering lectures on
typeface design with Free Software.

Harrison (H)

1, 2.

Ludivine Loiseau (LL)

and now all: Hello Dave.

Dave Crossland (DC)

Hellooo ...

Alright!

H

Well, thank you for taking a bit of time with us for the interview. First thing is maybe to set a kind of context of your situation, your current situation. What you’ve done before. Why you are working on fonts and these kinds of things.

DC

Oh yes, yeah. Well, I take it quite far back, when I was a teenager. I
was planning to do computer science university studying like mathematics
and physics in highschool. I needed some work experience. I decided I
didn’t want to work with computers. So I dropped maths and physics and
I started working at ... I mean I started studying art and design, and also
socio-linguistics in highschool. I was looking at going to Fine Arts but I
wasn’t really too worried about if I could get a job at the end of it, because
I could get a job with computers, if I needed to get a job. So I studied that
at my school for like a one year course, after my school. A foundation year,
and the deal with that is that you study all the different art and design disciplines. Because in highschool you don’t really have the specialities where you
specifically study textile or photography, not every school has a darkroom,
schools are not well equipped.

You get to experience all these areas of design and in that we studied graphic
design, motion graphics and I found in this a good opportunity to bring together the computer things with fine arts and visual arts aspects. In graphic
design in my school it was more about paper, it had nothing to do with
computers. In art school, that was more the case. So I grew into graphic
design.
(Ordering coffee and change of background music: Oh yeah, African beats!)

So, yes. I was looking at graphic design that was more computer based than
in art school. I wasn’t so interested in like regular illustration as a graphic
design. Graphic design has really got three purposes: to persuade people,
that’s advertising; to entertain people, movie posters, music album covers,
illustration magazines; and there is also graphic design to inform people,
in England it’s called ‘information design’, in the US it’s called ‘information
architecture’ ... structuring websites, information design. Obviously a big
part of that is typography, so that’s why I got interested in typography, via
information design. I studied at Ravensbourne college in London, what
I applied for was graphic information design. I started working at the IT
department, and that really kept me going to that college, I wasn’t so happy
with the direction of the courses. The IT department there was really really
good and I ended up switching to the interaction design course, because that
had more freedom to do the kind of typographic work I was interested in.
So I ended up looking at Free Software design tools because I became frustrated by the limitations of the Adobe software which the college was using, just what everybody used. And at that point I realized what ‘software freedom’ meant. I’ve been using Debian since I was like a teenager,
but I hadn’t really looked to the depth of what Free Software was about. I
mean back in the nineties Windows wasn’t very good but probably at that
time 2003-2004, MacOSX came out and it was getting pretty nice to use.
I bought a Mac laptop without really thinking about it and because it was
a Unix I could use the software like I was used to do. And I didn’t really
think about the issues with Free Software, MacOSX was Unix so it was the
same I figured. But when I started to do my work I really stood against the
limitations of Adobe software, specifically in parallel publishing which is
when you have the same basic informations that you want to communicate
in different mediums. You might want to publish something in .pdf, on the
web, maybe also on your mobile phone, etc. And doing that with Adobe
software back then was basically impossible. I was aware of Free Software
design tools and it was kind of obvious that even if they weren’t very pushed
by then they at least had the potential to be able to do this in a powerful
way. So that’s what I figured out. What that issue with Free Software really
meant. Who’s in control of the software, who decides what it does, who
decides when it’s going to support this feature or that feature, because the
features that I wanted, Adobe wasn’t planning to add them. So that’s how I
got interested in Free Software.
When I graduated I was looking for something that I could contribute in
this area. And one of the Scribus guys, Peter Linnell, made an important
post on the Scribus blog. Saying, you know, the number one problem
with Free Software design is fonts, like it’s dodgy fonts with incorrect this,
incorrect that, have problems when printed as well ... and so yeah, I felt
whoa, I have a background in typography and I know about Free Software,
I could make contributions in fonts. Looking into that area, I found that
there were some postgraduate courses you could study in Europe. There’s
two, there is one at The Hague in The Netherlands and one at Reading.
They’re quite different courses in their character and in how much they cost
and how long they last for and what level of qualification they are. But
they’re both postgraduate courses which focus on typeface design and font
software development. So if you’re interesed in that area, you can really
concentrate for about a year and bring your skills up to a high professional
level. So I applied to the course at Reading and I was accepted there and
I’m currently studying there part time. I’m studying there to work on Free
Software fonts. So that’s the full story of how I ended up in this area.
H

Excellent! Last time we met, you summarized in a very relevant way the history of font design software, which is proof by itself that everything is related with fonts and these kinds of small networks, and I would like you to summarize it again.

(Laughing)

DC

Alright. In that whole journey of getting into this area of parallel publishing and automated design, I was asking around for people who
worked in that area because at that time not many people had worked in
parallel publishing. It’s a lot of a bigger deal now, especially in the Free
Software community where we have Free Software manuals translated into
many languages, written in .doc and .xml and then transformed into print
and web versions and other versions. But back then this was kind of a new
concept, not all people worked on it. And so, asking around, I heard about
the department of typography at the university of Reading. One of the lecturers there, actually the lecturer of the typeface design course put me on
to a designer in Holland, Petr van Blokland. He’s a really nice guy, really
friendly. And I dropped him an e-mail as I was in Holland that year – just
dropped by to see him and it turned out he’s not only involved in parallel
publishing and automated design, but also in type design. For him there is really no distinction between type design and typography. It’s kind of like a big building – you have the architecture of the building but you can also go down into the bricks. It’s kind of like that with typography, the type design is all these little pieces you assemble to create the typography out of. He’s an award-winning typeface designer and typographer and he was involved in the early days of typography very actively. He kind of explained to me the whole story of type design technology.

(Coffee delivery and jazz music)

So, the history of typography actually starts with Free Software, with Donald
Knuth and his TeX. The TeX typesetting system has its own font software
or font system called Metafont. Metafont is a font programming language, an algebraic programming language for describing letter forms. It really gets
into the internal structure of the shapes. This is a very non-visual programming approach to it where you basically use this programming language to
describe with algebra how the shapes make up the letters. If you have a
capital H, you got essentially 3 lines, two vertical stems and a horizontal crossbar, and so, in algebra you can say that you’ve got one ratio which is the height of the vertical lines and another ratio which is the width between them and another ratio which is the distance between the top point and the middle point of the crossbar and the bottom point. By describing all of that
in algebra, you really describe the structure of that shape and that gives you
a lot of power because it means you can trace a pen nib object over that skeleton to generate the final typeform, and so you can apply variations, you can rotate the pen nib – you can have different pen nib shapes. And you can
have a lot of different typefaces out of that kind of source code. But that
approach is not a visual approach, you have to take it with a mathematical
mind and that isn’t something which graphic designers typically have as a
strong part of their skill set.
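As a rough illustration of that algebraic way of working (a Python sketch with invented proportions, not actual Metafont code), one can describe the skeleton of a capital H with a few ratios and then trace a crude pen over it:

# Hypothetical sketch: a capital H described by ratios, from which a
# skeleton is computed and a simple rectangular 'pen' is traced along it.
# All numbers are invented for illustration.

def capital_h(height=700, width_ratio=0.75, crossbar_ratio=0.52):
    """Return the H skeleton as three strokes: two stems and a crossbar."""
    width = height * width_ratio
    bar_y = height * crossbar_ratio
    left_stem  = ((0, 0), (0, height))
    right_stem = ((width, 0), (width, height))
    crossbar   = ((0, bar_y), (width, bar_y))
    return [left_stem, right_stem, crossbar]

def trace_pen(stroke, pen_width=80):
    """Expand a straight skeleton stroke into a filled rectangle outline."""
    (x1, y1), (x2, y2) = stroke
    h = pen_width / 2.0
    if x1 == x2:   # vertical stem
        return [(x1 - h, y1), (x1 + h, y1), (x2 + h, y2), (x2 - h, y2)]
    return [(x1, y1 - h), (x2, y2 - h), (x2, y2 + h), (x1, y1 + h)]  # crossbar

for stroke in capital_h():
    print(trace_pen(stroke))

Changing the ratios or the pen width regenerates a different H from the same description, which is the kind of leverage being described.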

The next step was describing the outline of a typeface, and the guy who
did this was working, I believe, at URW. He invented a digital typography
system or typedesign program called Ikarus. The rumor is it’s called Ikarus
because it crashed too much. Peter Karow is this guy. He was the absolute
unknown real pioneer in this area. They were selling this proprietary software powered by a tablet, with a drawing pen for entering the points and it
used its own kind of spline-curve technology.
This was very expensive – it ran on DMS computers and URW was making
a lot of money selling those mini computers in well I guess late 70s and
early 80s. And there was a new small home computer that came out called
the Apple Macintosh. This was quite important because not only was it a
personal computer. It had a graphical user interface and also a printer, a laser
writer which was based on the Adobe PostScript technology. This was what
made desktop publishing happen. I believe it was a Samsung printer revised
by Apple and Adobe’s PostScript technology. Those three companies, those
three technologies was what made desktop publishing happen. Petr van
Blokland was involved in it, using the Ikarus software, developing it. And
so he ported the program to the Mac. So Ikarus M was the first font
editor for personal computers and this was taken on by URW but never
really promoted because the ... Mac cost not a lot of money compared to those
big expensive computers. So, Ikarus M was not widely distributed. It’s
kind of an obvious idea – you know you have those innovative computers
doing graphic interfaces and laser printing and several different people had
several different ideas about how to employ that. Obviously you had John
Warnock within Adobe and at that point Adobe was a systems company,
they made this PostScript system and these components, they didn’t make
any user applications. But John Warnock – and this is documented in the
book on the Adobe story – he really pushed within the company to develop
Adobe Illustrator, which allowed you to interact with and edit PostScript code and do vector drawings interactively. That was the kind of illustration and graphic design which we mentioned earlier. Then there was the ... page layout sort of thing, and that was taken care of by a guy called Paul Brainerd, whose company Aldus made PageMaker. That did similar kinds of things to Illustrator, but focused on page layout and typography, text layout
rather than making illustrations. So you had Illustrator and PageMaker and
this was the beginning of the desktop publishing tool-chain.
H

When was it?

DC

This is in the mid-eighties. The Mac came out in 1984.

Pierre Huyghebaert (PH)

Illustrator in 1986 I think.

DC

Yeah. And then the Apple LaserWriter, which is I believe a Samsung printer, came out in 1985, and I believe the first edition of Illustrator was in 1988 ...

PH

No, I think Illustrator 1 was in 1986.

DC

OK, if you read the official Adobe story book, it’s fully documented 1.

H

It’s interesting that it follows so quickly after the Macintosh.

DC

Yes! That’s right. It all happened very quickly because with PostScript and the MacOS, Adobe and Apple had really built the infrastructure there that they could build on top of. And that’s a common thing we see played out over and over ... Things are developed quite slowly when they
are getting the infrastructure right, and then when the infrastructure is in
place you see this burst of activity where people can slot it together very
quickly to make some interesting things. So, you had this other guy called
Jim von Ehr and he saw the need for a graphical user interface to develop
fonts with and so he founded a small company called Altsys and he made a
program called Fontographer. So that became the kind of de-facto standard
font editing program.
PH

And before that, do you know what font design software Adobe designers used?

DC

I don’t know. Basically when Adobe made PostScript for the Apple
LaserWriter then they had the core 35 PostScript fonts, which is about
a thousand families, 35 different weights or variants of the fonts. And I
believe that those were from Linotype. Linotype developed that in collaboration with Adobe, I have no idea about what software they used, they
may have had their own internal software. I know that before they had

1 Pamela Pfiffner. Inside the Publishing Revolution: The Adobe Story. Adobe Press, 2008

Illustrator they were making PostScript documents by hand like TeX, programming PostScript sourcecode. It might have been in a very low tech way.
Because those were the core fonts that have been used in PostScript.
So you had Fontographer and this is yeah I mean a GUI application for
home computers to make fonts with. Fontographer made early 90s David
Carson graphic design posters. Because it meant that anybody could start
making fonts not only people that were in the type design guild. That all
David Carson kind of punk graphic design, it’s really because of Desktop
publishing and specifically because of Fontographer. Because that allowed
people to make these fonts. Previous printing technologies wouldn’t allow
you to make these kinds of fonts without extreme efforts. I mean a lot of the
effects you can do with digital graphics you can’t do without digital graphics
– air brushing sophisticated effects like that can be achieved but it’s really a
lot of effort.

So going back to the guys from Holland, Petr has a younger brother called
Erik and he went to college at the Royal Academy of Art, the KABK, in The Hague with a guy who is Just van Rossum and he’s the younger
brother of Guido van Rossum who is now quite famous because he’s the guy
who developed and invented Python. In the early 90s Jim von Ehr is developing Fontographer, and Fontographer 4 comes out and Petr and Just and
Erik managed to get a copy of the source code of Fontographer 3 which is the
golden version that we used, like Quark, that was what we used throughout
most of the 90s and so they started adding things to that to do scripting on
Fontographer with Python and this was called Robofog, and that was still used until quite recently, because it had features no one has ever seen anywhere else. The deal was you had to get a Fontographer 4 license, and then you could get a Robofog license, for Fontographer 3. Then Apple changed the system architecture and that meant Fontographer 3 would no longer run on Apple computers. Obviously that put a bit of a damper on Robofog. Pretty soon after that Jim sold Fontographer to Macromedia. He and his employees continued to develop Fontographer into Freehand, it went from a
font drawing application into a more general purpose illustration tool. So
Macromedia bought Altsys for Freehand because they were competing with
Adobe at that time. And they didn’t really have any interest in continuing
to develop Fontographer. Fonts is a really obscure kind of area. As a proprietary software company, you are doing things to make a profit and if
the market is too small to justify your investment then you’ll just not keep
developing the software. Fontographer shut at that point.
PH

I think they paid one guy to maintain it and answer questions.

DC

Yeah. I think they even stopped actively selling it, you had to ask them to sell you a license. Fontographer had stopped at that point and there was no actively developed font editor. There were a few Windows programs, which were kind of shareware for developing fonts, because by this time Apple and Microsoft had got fed up with paying Adobe’s extortionate PostScript licensing fees. They developed their own font format called TrueType. When
Fontographer stopped there was the question of which one will become the
predominant font editor and so there was Fontlab. This was developed by
a guy Yuri Yarmola, Russian originally I believe, and it became the primary
proprietary type design tool.
The Python guys from Holland started using Fontlab. They managed to
convince the Fontlab guys to include Python scripting support in Fontlab.
Python had become a major language, for doing this kind of scripting. So
Fontlab added in Python scripting. And then different type designers, font
developers started to use Python scripts to help them develop their fonts,
and a few of the guys doing that decided to join up and they created the
RoboFab project which took the ideas that had been developed for Robofog
and reimplemented them with Fontlab – so RoboFab. This is now a Free
Software package, under the MIT Python style licence. So it is a Free
Software licence but without copyleft. It has been developed as a collaborative project. If you’re interested in the development you can just join the
mailing list. It’s a very mature project and the really beautiful thing about
it is that they developed a font object model and so in Python you have a very
clean and easily understandable object-oriented model of what a font is. It
makes it very easy to script things. This is quite exciting because that means
you can start to do things which are just not really visible with the graphic
design interface. The thing with those fonts is like there is a scale, it is like
architecture. You’ve got the designer of the building and the designer of
the bricks. With a font it is the same. You have the designer who shapes
each letter and then you’ve got the character-spacing which makes what a
paragraph will look like. A really good example of this is if you want to do
interpolation, if you have a very narrow version of a font and a very wide one,
and you want to interpolate in different versions between those two masters
– you really want to do that in a script, and RoboFab makes this really easy
to do this within Fontlab. The ever important thing about RoboFab was
that they developed UFO, I think it’s the Universal Font Object – I’m not
sure what the exact name is – but it’s an XML font format which means that
you can interchange font source data with different programs and specifically
that means that you have a really good font interpolation program that can
read and write that UFO XML format and then you can have your regular
type design font editor that will generate the binary font formats that you actually use in a system. You can write your own tool for a specific
task and push and pull the data back and forth. Some of these Dutch guys,
especially Erik has written a really good interpolation tool. So, as a kind of thread in the story of fonts: remember that time when Fontographer was not developed actively? Then you have George Williams from California who was interested in digital typography and fonts, and Fontographer was not being actively developed and he found that quite frustrating, so he said like Well, I’ll write my own font editor. He wrote it from scratch. I mean this is a great project.
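A minimal sketch of that interpolation idea in plain Python (the point coordinates are invented, and a real script would operate on RoboFab or UFO glyph objects rather than bare tuples):

# Hypothetical sketch: given a 'narrow' and a 'wide' master drawn with
# matching point structures, compute any number of in-between instances.
# The outlines here are a single invented contour, for illustration only.

narrow = [(0, 0), (60, 0), (60, 700), (0, 700)]
wide   = [(0, 0), (140, 0), (140, 700), (0, 700)]

def interpolate(factor, min_outline, max_outline):
    """Linear interpolation between two point-compatible outlines, factor 0.0-1.0."""
    if len(min_outline) != len(max_outline):
        raise ValueError("masters are not point-compatible")
    return [
        (x1 + factor * (x2 - x1), y1 + factor * (y2 - y1))
        for (x1, y1), (x2, y2) in zip(min_outline, max_outline)
    ]

for step in range(5):
    factor = step / 4.0
    print(f"instance {factor:.2f}:", interpolate(factor, narrow, wide))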
LL

Can you tell us some details about your course?

DC
There are four main deliverables in the course, that you normally
do in one year, twelve months. The big thing is that you do a professional quality OpenType font, with an extended pan-european latin coverage in regular and italic, maybe bold. You also do a complex non-latin
in Arabic, Indic, maybe Cyrillic ... well not really Cyrillic because there are
problems to get a Cyrillic type experts from Russia to Britain ... or Greek,
or any script with which you have a particular background in. And so,
they didn’t mandate which software students can use, and I was already
used to FontForge, while pretty much all the other students were using
FontLab. This font development is the main thing. The second thing is
the dissertation, that goes up to 8,000 words, an academic master in typography dissertation. Then there is a smaller essay, that will be published
on http://www.typeculture.com/academic_resource/articles_essays/, and it’s

a kind of a practice for writing the dissertation. Then you have to document
your working process throughout the year, you have to submit your working
files, source files. Every single step is documented and you have to write
a small essay describing your process. And also, of course, apart from the
type design, you make a font specimen, so you make a very nice piece of
design that show up your font in use, as commercial companies do. All that
takes a full intense year. For British people, the course costs about £3,000,
for people in the EU, it costs about £5,000 and about £10,000 for non-EU.
Have a look at the website for details, but yes, it’s very expensive.
LL

And did you also design a font?

DC

Yes. But I do it part-time. Normally, you could do the typeface,
and the year after you do the dissertation. For personal reasons, I do the
dissertation first, in the summer, and next year I’ll do the typeface, I think
in July next year.

LL

You have an idea on which font you’ll work?

DC

Yes. The course doesn’t specify which kind of typeface you have to work on. But they really prefer a textface, a serif one, because it’s the most complicated and demanding work. If you can do a high quality serif text typeface design, you can do almost any typeface design! Of course, lots of students also do a sans serif typeface to be read at 8 or 9 points, or even for example dictionaries at 6 or 7 points. Other students design display typefaces that can be used for paragraphs but probably not at 9 points ...

Femke Snelting (FS)

It looks like you are asked to produce quite a lot of documents. Are these documents published anywhere, are they available for other designers?

DC

Yes, the website is http://www.typefacedesign.net and the teaching
team encourages students to publish their essays, and some people have
published their dissertation on the web, but it varies. Of course, being an
academic dissertation, you can request it from the university.

FS

I’m asking because in various presentations the figure of the ‘expert typographer’ came up, and the role Open Source software could have, to open up this guild.

DC

Yeah, the course in The Hague is cheaper, the pound was quite high so
it’s expensive to live in Britain during the last year, and the number of people
able to produce high quality fonts is pretty small ... And these courses are
quite inaccessible for most of the people because of being so expensive, you
have to be quite commited to follow them. The proprietary font editing
software, even with a student discount, is also a bit expensive. So yes, Free
and Open Source software could be an enabler. FontForge allows anybody
to grab it on the Internet and start making fonts. But having the tools
is just the beginning. You have to know what you’re doing to design a typeface, and this is separate from font software techniques. And books on the subject, there are quite a few, but none are really a full solution. There is www.typophile.org, a type design forum on the web, where you can
post preliminary designs. But of course you do not get the kind of critical
feedback as you can get on a masters course ...

FS
We talked to Denis Jacquerye from the DéjàVu project, and most of the
people who collaborate on the project are not type designers but people who are
interested in having certain glyphs added to a typeface. And we asked him if
there is some kind of teaching going on, to be sure that the people contributing
understand what they are doing. Do you see any way of, let’s say, a more open
way of teaching typography starting to happen?

DC

Yeah, I mean, that’s part of why the Free Software movement is going to branch out into the Free Culture movement. There is that website Freedom Defined 2 that states that the principles of Free Software can apply to all other kinds of works. This isn’t shared by everybody in the Free Software movement. Richard Stallman makes a clear difference between three kinds of works: the ones that function like software, encyclopedias, dictionaries, text books that tell how to make things, and text typefaces; art works like music and films; and text works about opinions like scientific papers or political manifestos. He believes that different kinds of rights should apply to those different kinds of works. There is also a different view in which anything in a computer that can be edited ought to be free like Free Software. That is certainly a position that many people take in the Free Software community. You can see that in the WikiMedia Foundation text books project. When more and more people are involved in typeface design from the Free Culture community, we will see more and more education material. There will be a snowball effect.

PH

Dave, we are running out of time ...

2 http://freedomdefined.org

DC

So just to finish about the FontForge Python scripting ... There is
Python embedded in FontForge so you can run scripts to control FontForge, you can add new features that maybe would be specific to your font and then in FontForge there is also a Python module which means that you can type into a Python interpreter. You type import fontforge and if it doesn’t
give you an error then you can start to do FontForge functions, just like in
the RoboFab environment. And in the process of adding that George kind
of re-architectured the FontForge source code so instead of being one large
program, there is now a large C library, libfontforge, and then a small C
program for rendering and also the Python module, a binding or interface
to that C library. This means if you are an application programmer it is very
straightforward to make a new font editor in whatever language you want,
using whatever graphic toolkit you want. So if you’re a JDK guy or a GTK
guy or even if you’re on Windows or Mac OS X, you can make a font editor
that has all the functionality of FontForge. FontForge is a kind of engine to
make font editors. This is quite exciting because it means it’s pretty straight
forward for somebody to write a font editing program which is designed for,
say, beginners.
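For instance, a short sketch of that kind of scripting (it assumes the fontforge Python module is installed; the calls follow FontForge’s documented API, but the font, glyph and numbers here are invented for illustration):

# Hypothetical sketch: build a tiny font from the Python interpreter.
import fontforge

font = fontforge.font()                 # a new, empty font
font.familyname = "SketchSans"          # invented name
font.fontname = "SketchSans-Regular"

glyph = font.createChar(ord("I"))       # make a glyph for 'I'
pen = glyph.glyphPen()                  # draw a crude rectangular stem
pen.moveTo((100, 0))
pen.lineTo((100, 700))
pen.lineTo((200, 700))
pen.lineTo((200, 0))
pen.closePath()
glyph.width = 300

font.generate("SketchSans-Regular.otf") # write out an OpenType font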
So, to come back to what we were just talking about in terms of educational materials to get people new to typeface design to be confident with themselves. Maybe they won’t be at that professional level yet, but they will be pleased with their own work and happy to work in a user interface where you feel like you’re in 2006, you know, with nice icons, nice windows, anti-aliasing and these kinds of things.
I mean there’s nothing wrong with the FontForge interface. It is what it
is. But it scares a lot of people away, people say that they don’t like this. I
think it is too scary, too different. I think we are going to see some exciting
stuff in the next few years in the Free Software font editor space.

At the Libre Graphics Meeting 2008 in Wroclaw, just before
Michael Terry presents his project ingimp to an audience of
curious GIMP developers and users, we meet up to talk more
about ‘instrumenting GIMP’ and about the way Terry thinks
data analysis could be done as a form of discourse. Michael
Terry is a computer scientist working at the Human Computer
Interaction Lab of the University of Waterloo, Canada and his
main research focus is on improving usability in Open Source
software. We speak about ingimp, a clone of the popular image
manipulation programme GIMP, but with an important difference: ingimp allows users to record data about their usage into
a central database, and subsequently makes this data available to
anyone. This conversation was also published in the Constant
publication Tracks in electr(on)ic fields.
Maybe we could start this conversation with a description of the ingimp project
you are developing and why you chose to work on usability for GIMP?
So the project is ‘ingimp’, which is an instrumented version of GIMP, it
collects information about how the software is used in practice. The idea is
you download it, you install it, and then with the exception of an additional
start up screen, you use it just like regular GIMP. So, our goal is to be as
unobtrusive as possible to make it really easy to get going with it, and then
to just forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually used in
practice. There are plenty of forums where people can express their opinions
about how GIMP should be designed, or what’s wrong with it, there are
plenty of bug reports that have been filed, there are plenty of usability issues
that have been identified, but what we really lack is some information about
how people actually apply this tool on a day to day basis. What we want
to do is elevate discussion above just anecdote and gut feelings, and to say,
well, there is this group of people who appear to be using it in this way,
these are the characteristics of their environment, these are the sets of tools
they work with, these are the types of images they work with and so on, so
that we have some real data to ground discussions about how the software
is actually used by people. You asked me now why GIMP? I actually used
GIMP extensively for my PhD work. I had these little cousins come down
and hang out with me in my apartment after school, and I would set them
up with GIMP, and quite often they would always start off with one picture,
they would create a sphere, a blue sphere, and then they played with filters
until they got something really different. I would turn to them looking
at what they had been doing for the past twenty minutes, and would be
completely amazed at the results they were getting just by fooling around
with it. And so I thought, this application has lots and lots of power, I’d
like to use that power to prototype new types of interface mechanisms. So
I created JGimp, which is a Java based extension for the 1.0 GIMP series,
that I can use as a back-end for prototyping novel user interfaces. I think
that it is a great application, there is a lot of power to it, and I had already
an investment in its code base so it made sense to use that as a platform for
testing out ideas of open instrumentation.
What is special about ingimp is the fact that the data you generate is made by
the software you are studying itself. Could you describe how that works?
Every bit of data we collect, we make available: you can go to the website,
you can download every log file that we have collected. The intent really
is for us to build tools and infrastructure so that the community itself can
sustain this analysis, can sustain this form of usability. We don’t want to
create a situation where we are creating new dependencies on people, or
where we are imposing new tasks on existing project members. We want to
create tools that follow the same ethos as Open Source development, where
anyone can look at the source code, where anyone can make contributions,
from filing a bug to doing something as simple as writing a patch, where
they don’t even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low barrier
to participation. At the same time, we want to increase the signal-to-noise
ratio. Yesterday I talked with Peter Sikking, an information architect working for GIMP, and he and I both had this experience where we work with
user interfaces, and since everybody uses an interface, everybody feels they
are an expert, so there can be a lot of noise. So, not only did we want to
create an open environment for collecting this data, and analysing it, but we
also want to increase the chance that we are making valuable contributions,
and that the community itself can make valuable contributions. Like I said,
there is enough opinion out there. What we really need to do is to better
understand how the software is being used. So, we have made a point from
the start to try to be as open as possible with everything, so that anyone can
really contribute to the project.
ingimp has been running for a year now. What are you finding?
I have started analysing the data, and I think one of the things that we
realised early on is that it is a very rich data set; we have lots and lots of
data. So, after a year we’ve had over 800 installations, and we’ve collected
about 5000 log files, representing over half a million commands, representing thousands of hours of the application being used. And one of the things
you have to realise is that when you have a data set of that size, there are so
many different ways to look at it that my particular perspective might not
be enough. Even if you sit someone down, and you have him or her use the
software for twenty minutes, and you videotape it, then you can spend hours
analysing just that twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that anyone
could easily participate. We have the log files available, but we really didn’t have an infrastructure for analysing them. So, we created this new piece of
software called ‘StatsJam’, an extension to MediaWiki, which allows anyone
to go to the website and embed SQL-queries against the ingimp data set
and then visualise those results within the Wiki text. So, I’ll be announcing
that today and demonstrating that, but I have been using that tool now for
a week to complement the existing data analysis we have done. One of the
first things that we realized is that we have over 800 installations, but then
you have to ask, how many of those are really serious users? A lot of people
probably just were curious, they downloaded it and installed it, found that it
didn’t really do much for them and so maybe they don’t use it anymore. So,
the first thing we had to do is figure out which data points we should really
pay attention to. We decided that a person should have saved an image,
and they should have used ingimp on two different occasions, preferably at
least a day apart, where they’d saved an image on both occasions. We
used that as an indication of what a serious user is. So with that filter in
place, the ‘800 installations’ drop down to about 200 people. So we
had about 200 people using ingimp, and looking at the data this represents
about 800 hours of use, about 4000 log files, and again still about half a million commands. So, it’s still a very significant group of people. 200 people
is still a lot, and that’s a lot of data, representing about 11000 images they
have been working on, there’s just a lot.
From that group, what we found is that use of ingimp is really short and
versatile. So, most sessions are about fifteen minutes or less, on average.
There are outliers, there are some people who use it for longer periods of
time, but really it boils down to them using it for about fifteen minutes, and
they are applying fewer than a hundred operations when they are working on
the image. I should probably be looking at my data analysis as I say this, but
they are very quick, short, versatile sessions, and when they use it, they use
less than 10 different tools, or they apply less than 10 different commands
when they are using it. What else did we find? We found that the two
most popular monitor resolutions are 1280 by 1024 and 1024 by 768. So,
those represent collectively 60% of the resolutions, and really 1280 by 1024
represents pretty much the maximum for most people, although you have
some higher resolutions. So one of the things that’s always contentious
about GIMP, is its window management scheme and the fact that it has
multiple windows, right? And some people say, well you know this works
fine if you have two monitors, because you can throw out the tools on one
monitor and then your images are on another monitor. Well, about 10%
to 15% of ingimp users have two monitors, so that design decision is not
working out for most of the people, if that is the best way to work. These
are things I think that people have been aware of, it’s just that now we have
some actual concrete numbers you can turn to and say, now this is
how people are using it. There is a wide range of tasks that people are
performing with the tool, but they are really short, bursty tasks.
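As an illustration, the ‘serious user’ filter described above can be expressed as a single query over a database of parsed log files. The following is only a minimal sketch: the table and column names (events, user_id, command, timestamp) are hypothetical and are not the actual ingimp or StatsJam schema.

    # Minimal sketch of the 'serious user' filter: keep installations that
    # saved an image on at least two different days, at least a day apart.
    # The schema here is an assumption, not the real ingimp database.
    import sqlite3

    conn = sqlite3.connect("ingimp_logs.db")  # assumed database of parsed logs

    query = """
    SELECT user_id
    FROM   events
    WHERE  command = 'file-save'              -- assumed name for the save command
    GROUP  BY user_id
    HAVING COUNT(DISTINCT DATE(timestamp)) >= 2
       AND julianday(MAX(timestamp)) - julianday(MIN(timestamp)) >= 1.0
    """

    serious_users = [row[0] for row in conn.execute(query)]
    print(len(serious_users), "installations pass the filter")
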
Every time you start up ingimp, a screen comes up asking you to describe what
you are planning to do and I am interested in the kind of language users invent
to describe this, even when they sometimes don’t know exactly what it is they are
going to do. So inventing language for possible actions with the software has in
a way become a creative process that is now shared between interface designer,
developer and user. If you look at the ‘activity tags’ you are collecting, do you
find a new vocabulary developing?
I think there are 300 to 600 different activity tags that people register
within that group of ‘significant users’. I didn’t have time to look at all of
them, but it is interesting to see how people are using that as a medium
for communicating to us. Some people will say, Just testing out, ignore this!
Or, people are trying to do things like insert HTML code, to do like a
cross-site scripting attack, because, you have all the data on the website, so
they will try to play with that. Some people are very sparse and they say
‘image manipulation’ or ‘graphic design’ or something like that, but then
some people are much more verbose, and they give more of a plan, This
is what I expect to be doing. So, I think it has been interesting to see how
people have adopted that and what’s nice about it, is that it adds a really nice
human element to all this empirical data.
I wanted to ask you about the data. Without getting too technical, could
you explain how these data are structured? What do the log files look like?

So the log files are all in XML, and generally we compress them, because
they can get rather large. And the reason that they are rather large is that we
are very verbose in our logging. We want to be completely transparent with
respect to everything, so that if you have some doubts or if you have some
questions about what kind of data has been collected, you should be able to
look at the log file, and figure out a lot about what that data is. That’s how
we designed the XML log files, and it was really driven by privacy concerns
and by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database, so
that we can query the data set.
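A rough sketch of the pipeline just described: read a compressed XML log file and load its events into a relational database for querying. The element and attribute names below are invented for illustration; the real ingimp log format is far more verbose.

    # Hypothetical illustration of the server-side step: parse a compressed
    # XML log and load its events into a database. Element and attribute
    # names are assumptions, not the actual ingimp format.
    import gzip
    import sqlite3
    import xml.etree.ElementTree as ET

    conn = sqlite3.connect("ingimp_logs.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(user_id TEXT, log_file TEXT, command TEXT, timestamp TEXT)"
    )

    log_name = "example_log.xml.gz"            # logs are compressed on disk
    with gzip.open(log_name, "rb") as f:
        root = ET.parse(f).getroot()

    user_id = root.get("installation-id")      # assumed attribute
    for event in root.iter("command"):         # assumed element name
        conn.execute(
            "INSERT INTO events VALUES (?, ?, ?, ?)",
            (user_id, log_name, event.get("name"), event.get("time")),
        )
    conn.commit()
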
Now we are talking about privacy ... I was impressed by the work you have done
on this; the project is unusually clear about why certain things are logged, and
other things not; mainly to prevent the possibility of ‘playing back’ actions so that
one could identify individual users from the data set. So, while I understand
there are privacy issues at stake I was wondering ... what if you could look at the
collected data as a kind of scripting for use? Writing a choreography that might
be replayed later?
Yes, we have been fairly conservative with the type of information that we
collect, because this really is the first instance where anyone has captured
such rich data about how people are using software on a day to day basis,
and then made all that data publicly available. When a company does
this, they will keep the data internally, so you don’t have this risk of someone outside figuring something out about a user that wasn’t intended to be
discovered. We have to deal with that risk, because we are trying to go about
this in a very open and transparent way, which means that people may be
able to subject our data to analysis or data mining techniques that we haven’t
thought of and extract information that we didn’t intend to be recording in
our files, but which is still there. So there are fairly sophisticated techniques
where you can do things like look at audio recordings of typing and the timings between keystrokes, and then work backwards from the sounds made
to figure out the keys that people are likely pressing. So, keyboard
audio and keystroke timings alone often give enough information
to reconstruct what people are actually typing. So we are always
sort of wary about how much information is in there. While it might be
nice to be able to do something like record people’s actions and then share
that script, I don’t think that that is really a good use of ingimp. That said,
I think it is interesting to ask, could we characterize people’s use enough, so
that we can start clustering groups of people together and then providing a
forum for these people to meet and learn from one another? That’s something we haven’t worked out. I think we have our work cut out for us
right now just to characterize how the community is using it.
It was not meant as a feature request, but as a way to imagine how usability
research could flip around and also become productive work.

Yes, totally. I think one of the things that we found when bringing people
in to assess the basic usability of the ingimp software and ingimp website,
is that people like looking at things like what commands other people are
using, what the most frequently used commands are, and part of the reason
that they like that, is because of what it teaches them about the application.
So they might see a command they were unaware of. So we have toyed with
the idea of providing not only the command name, but also a link
from that command name to the documentation. I didn’t have time to
implement it, but certainly there are possibilities like that, you can imagine.

Maybe another group can figure something out like that? That’s the beauty of
opening up your software plus data set of course. Well, just a bit more on what
is logged and what not ... Maybe you could explain where and why you put the
limit and what kind of use you might miss out on as a result?
I think it is important to keep in mind that whatever instrument you use
to study people, you are going to have some kind of bias, you are going
to get some information at the cost of other information. So if you do a
videotaped observation of a user and you just set up a camera, then you
are not going to find details about the monitor maybe, or maybe you are
not really seeing what their hands are doing. No matter what instrument
you use, you are always getting a particular slice. I think you have to work
backwards and ask what kind of things do you want to learn. And so the
data that we collect right now, was really driven by what people have done
in the past in the area of instrumentation, but also by us bringing people
into the lab, observing them as they are using the application, and noticing
particular behaviours and saying, hey, that seems to be interesting, so what
kind of data could we collect to help us identify those kind of phenomena,
or that kind of performance, or that kind of activity? So again, the data that
we were collecting was driven by watching people, and figuring out what
information will help us to identify these types of activities. As I’ve said,
this is really the first project that is doing this, and we really need to make
sure we don’t poison the well. So if it happens that we collect some bit of
information, that then someone can later say, Oh my gosh, here is the person’s
file system, here are the names they are using for the files or whatever, then it’s
going to make the normal user population wary of downloading this type
of instrumented application. The thing that concerns me most about
Open Source developers jumping into this domain is that they might not
be thinking about how you could potentially impact privacy.
I don’t know, I don’t want to get paranoid. But if you are doing it, then
there is a possibility someone else will do it in a less considerate way.
I think it is only a matter of time before people start doing this, because
there are a lot of grumblings about, we should be doing instrumentation, someone just needs to sit down and do it. Now there is an extension out for Firefox
that will collect this kind of data as well, so you know ...
Maybe users could talk with each other, and if they are aware that this
type of monitoring could happen, then that would add a different social
dimension ...
It could. I think it is a matter of awareness, really. So we bring
people into the lab and have them go to the ingimp website, download and
install it and use it, and go check out the stats on the website, and then we
ask questions like, what kind of data are we collecting? We have a lengthy
consent agreement that details the type of information we are collecting and
the ways your privacy could be impacted, but people don’t read it.
So concretely ... what information are you recording, and what information are
you not recording?
We record every command name that is applied to a document, to an image.
Where your privacy is at risk with that, is that if you write a custom script,
then that custom script’s name is going to be inserted into a log file. And so
if you are working for example for Lucas or DreamWorks or something like
that, or ILM, in some Hollywood movie studio and you are using ingimp
and you are writing scripts, then you could have a script like ‘fixing Shrek’s
beard’, and then that is getting put into the log file and then people are
going to know that the studio uses ingimp. We collect command names,
we collect things like what windows are on the screen, their positions, their
sizes, we take hashes of layer names and file names. We take a string and
then we create a hash code for it, and we also collect information about how
long is this string, how many alphabetical characters, numbers, things like
that, to get a sense of whether people are using the same files, the same
layer names time and time again, and so on. But this is an instance where
our first pass at this actually left open the possibility of people taking those
hashes and then reconstructing the original strings from that. Because we
have the hash code, we have the length of the string, all you have to do is
generate all possible strings of that length, take the hash codes and figure
out which hashes match. And so we had to go back and create a new
scheme for recording this type of information where we create a hash and
we create a random number, we pair those up on the client machine but
we only log the random number. So, from log to log then, we can track if
people use the same image names, but we have no idea of what the original
string was. There are these little ‘gotchas’, things to look out for, that I
don’t think most people are aware of, and this is why I get really concerned
about instrumentation efforts right now, because there isn’t this body of
experience of what kind of data we should collect, and what we shouldn’t
collect.
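The hashing problem and its fix can be sketched in a few lines. The first scheme (hash plus string length) can be brute-forced; the revised scheme pairs each name with a random token on the client and only ever logs the token. The code below is an illustration of that idea under assumed names, not the actual ingimp implementation.

    # Sketch of the revised logging scheme: the client keeps a private table
    # pairing each string (file name, layer name) with a random token, and only
    # the token appears in the log. Reuse of the same name can still be tracked
    # from log to log, but the original string cannot be reconstructed.
    # Illustrative only; names and details are assumptions.
    import secrets

    class PrivateNames:
        def __init__(self):
            self._tokens = {}        # string -> random token, never leaves the client

        def token_for(self, name: str) -> str:
            if name not in self._tokens:
                self._tokens[name] = secrets.token_hex(8)
            return self._tokens[name]

    names = PrivateNames()
    print(names.token_for("fixing Shrek's beard"))   # this token goes into the log
    print(names.token_for("fixing Shrek's beard"))   # same token on reuse
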
As we are talking about this, I am already more aware of what data I would allow
to be collected. Do you think by opening up this data set and the transparent
process of collecting and not collecting, this will help educate users about these
kinds of risks?
It might, but honestly I think the thing that would probably educate people
the most is if there were a really large privacy error that got a lot of
news, because then people would become more aware of it (and this is not
to say that we want that to happen with ingimp). Right now, when we bring
people in and we ask them about privacy, Are you concerned about privacy?,
they say No, and we ask Why? Well, they inherently trust
us, but the fact is that Open Source also lends a certain amount of trust to
it, because they expect that since it is Open Source, the community will in
some sense police it and identify potential flaws with it.

Is that happening?
Are you in dialogue with the Open Source community about this?

No, I think probably five to ten people have looked at the ingimp code –
realistically speaking I don’t think a lot of people looked at it. Some of the
GIMP developers took a gander at it to see how we could put this upstream,
but I don’t want it upstream, because I want it to always be an opt-in, so
that it can’t be turned on by mistake.
You mean you have to download ingimp and use it as a separate program? It
functions in the same way as GIMP, but it makes the fact that it is a different
tool very clear.

Right. You are more aware, because you are making that choice to download
that, compared to the regular version. There is this awareness about that.
We have this lengthy text-based consent agreement that talks about the data
we collect, but less than two percent of the population reads license agreements. And, most of our users are actually non-native English speakers,
so there are all these things that are working against us. So, for the past
year we have really been focussing on privacy, not only in terms of how we
collect the data, but how we make people aware of what the software does.
We have been developing wordless diagrams to illustrate how the software
functions, so that we don’t have to worry about localisation errors as much.
And so we have these illustrations that show someone downloading ingimp,
starting it up, a graph appears, there is a little icon of a mouse and a keyboard on the graph, and they type and you see the keyboard bar go up, and
then at the end when they close the application, you see the data being sent
to a web server. And then we show snapshots of them doing different things
in the software, and then show a corresponding graph change. So, we developed these by bringing in both native and non-native speakers, having
them look at the diagrams and then tell us what they meant. We had to go
through about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any help or
prompts. So, this is an ongoing research effort, to come up with techniques
that not only work for ingimp but also for other instrumentation efforts, so
that people can become more aware of the implications.
Can you say something about how this type of research relates to classic usability
research and in particular to the usability work that is happening in GIMP?
Instrumentation is not new, commercial software companies and researchers
have been doing instrumentation for at least ten years, probably ten to
twenty years. So, the idea is not new but what is new, in terms of the
research aspects of this, is how do we do this in a way where we can make
all the data open? The fact that you make the data open, really impacts your
decision about the type of data you collect and how you are representing it.
And you need to really inform people about what the software does. But I
think your question is ... how does it impact the GIMP’s usability process?
Not at all, right now. But that is because we have intentionally been staying
off to the side, until we got to the point where we had an infrastructure
where the entire community could really participate in the data analysis.
We really want this to be a self-sustaining infrastructure, we don’t
want to create a system where you have to rely on just one other person for
this to work.

What approach did you take in order to make this project self-sustainable?

Collecting data is not hard. The challenge is to understand the data, and I
don’t want to create a situation where the community is relying on only one
person to do that kind of analysis, because this is dangerous for a number of
reasons. First of all, you are creating a dependency on an external party, and
that party might have other obligations and commitments, and might have
to leave at some point. If that is the case, then you need to be able to pass the
baton to someone else, even if that could take a considerable amount of time
and so on. You also don’t want to have this external dependency because,
given the richness of the data, you really need to have multiple people looking
at it, and trying to understand and analyse it. So how are we addressing
this? It is through this StatsJam extension to the MediaWiki that I will
introduce today. Our hope is that this type of tool will lower the barrier
for the entire community to participate in the data analysis process, whether
they are simply commenting on the analysis we made or taking the existing
analysis, tweaking it to their own needs, or doing something brand new.

In talking with members of the GIMP project here at the Libre Graphics
Meeting, they started asking questions like, So how many people are doing
this, how many people are doing this and how many this? They’ll ask me while
we are sitting in a café, and I will be able to pop the database open and say, A
certain number of people have done this, or, no one has actually used this tool at
all. The danger is that this data is very rich and nuanced, and you can’t really
reduce these kinds of questions to an answer of N people do this, you have to
understand the larger context. You have to understand why they are doing
it, why they are not doing it. So, the data helps to answer some questions,
but it generates new questions. They give you some understanding of how
the people are using it, but then it generates new questions of, Why is this
the case? Is this because these are just the people using ingimp, or is this
some more widespread phenomenon? They asked me yesterday how many
people are using this colour picker tool – I can’t remember the exact name –
so I looked and there was no record of it being used at all in my data set. So
I asked them when this came out, and they said, Well it has been there at
least since 2.4. And then you look at my data set, and you notice that most of
my users are in the 2.2 series, so that could be part of the reason. Another
reason could be, that they just don’t know that it is there, they don’t know
how to use it and so on. So, I can answer the question, but then you have
to sort of dig a bit deeper.
You mean you can’t say that because it is not used, it doesn’t deserve any attention?
Yes, you just can’t jump to conclusions like that, which is again why we
want to have this community website, which shows the reasoning behind
the analysis. Here are the steps we had to go through to get this result, so
you can understand what that means, what the context means, because if you
don’t have that context, then it’s sort of meaningless. It’s like asking, what
are the most frequently used commands? This is something that people
like to ask about. Well really, how do you interpret that? Is it the numbers
of times it has been used across all log files? Is it the number of people
that have used it? Is it the number of log files where it has been used at
least once? There are lots and lots of ways in which you can interpret this
question. So, you really need to approach this data analysis as a discourse,
where you are saying, here are my assumptions, here is how I am getting to
this conclusion, and this is what it means for this particular group of people.
So again, I think it is dangerous if one person does that and you come to
rely on that one person. We really want to have lots of people looking at it,
and considering it, and thinking about the implications.
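The ambiguity described here is easy to make concrete: the same question, ‘what are the most frequently used commands?’, maps onto at least three different aggregations over the same data. The sketch below reuses the hypothetical events table from the earlier examples; it is not the actual StatsJam analysis.

    # Three different readings of 'most frequently used command', over the
    # hypothetical events(user_id, log_file, command, timestamp) table.
    import sqlite3

    conn = sqlite3.connect("ingimp_logs.db")

    queries = {
        "total invocations across all log files":
            "SELECT command, COUNT(*) AS n "
            "FROM events GROUP BY command ORDER BY n DESC",
        "number of distinct people who used it":
            "SELECT command, COUNT(DISTINCT user_id) AS n "
            "FROM events GROUP BY command ORDER BY n DESC",
        "number of log files where it appears at least once":
            "SELECT command, COUNT(DISTINCT log_file) AS n "
            "FROM events GROUP BY command ORDER BY n DESC",
    }

    for label, sql in queries.items():
        top = conn.execute(sql).fetchone()   # each reading can rank commands differently
        print(label, "->", top)
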
Do you expect that this will impact the kind of interfaces that can be done for
GIMP?
I don’t necessarily think it is going to impact interface design, I see it
really as a sort of reality check: this is how communities are using the
software and now you can take that information and ask, do we want to
better support these people or do we ... For example, in my data set, most
people are working on relatively small images for short periods of time,
the images typically have one or two layers, so they are not really complex
images. So regarding your question, one of the things you can ask is, should
we be creating a simple tool to meet these people’s needs? All these people
are doing is cropping and resizing, fairly common operations, so should we
create a tool that strips away the rest of the stuff? Or, should we figure out
why people are not using any other functionality, and then try to improve
the usability of that? There are so many ways to use the data that I don’t really
know how it is going to be used, but I know it doesn’t drive design. Design
happens from a really good understanding of the users, the types of tasks
they perform, the range of possible interface designs that are out there, lots
of prototyping, evaluating those prototypes and so on. Our data set really
is a small potential part of that process. You can say, well according to this
data set, it doesn’t look like many people are using this feature, let’s not
182

much focus too on that, let’s focus on these other features or conversely,
let’s figure out why they are not using them ... Or you might even look at
things like how big their monitor resolutions are, and say well, given the size
of the monitor resolution, maybe this particular design idea is not feasible.
But I think it is going to complement the existing practices, in the best
case.

And do you see a difference in how interface design is done in free software projects,
and in proprietary software?
Well, I have been mostly involved in the research community, so I don’t have
a lot of exposure to design projects. I mean, in my community we are always
trying to look at generating new knowledge, and not necessarily at how to
get a product out the door. So, the goals or objectives are certainly different.
I think one of the dangers in your question is that you sort of lump a lot
of different projects and project styles into one category of ‘Open Source’.
‘Open source’ ranges from volunteer driven projects to corporate projects,
where they are actually trying to make money out of it. There is a huge diversity of projects that are out there; there is a wide diversity of styles, there
is as much diversity in the Open Source world as there is in the proprietary
world. One thing you can probably say, is that for some projects that are
completely volunteer driven like GIMP, they are resource strapped. There is
more work than they can possibly tackle with the number of resources they
have. That makes it very challenging to do interface design, I mean, when
you look at interface code, it makes up 50% or 75% of a code base. That
is not insignificant, it is very difficult to hack and you need to have lots of
time and manpower to be able to do significant things. And that’s probably
one of the biggest differences you see for the volunteer driven projects, it
is really a labour of love for these people and so very often the new things
interest them, whereas with a commercial software company developers are
going to have to do things sometimes they don’t like, because that is what
is going to sell the product.

In 2007, OSP met with venture communist Dmytri Kleiner
and his wife Franziska, an editor for a German publishing company, late at night in the bar Le Coq in
Brussels. Kleiner had just finished his lecture InfoEnclosure-2.0
at Verbindingen/Jonctions and we wanted to ask what his ideas
about peer production could mean for the practice of designers and typographers. Referring to Benjamin Tucker, Yochai
Benkler, Marcel Mauss and of course Karl Marx, Kleiner explains how to prevent leakage at the point of scarcity through
operating within a total system of worker-owned companies.
In between fundamentals of media and information economy, he
talks about free typography and what it has to do with nuts
and bolts, the problem of working with estimates and why the
people that develop Scribus should own all the magazines it
enables.

First of all we have to be clear, our own company is very small and
doesn’t actually earn enough money to sustain itself right now. We sustain
our company at this point by taking on other projects; for example we are
here for a project that has really little to do with Telekommunisten, where
we’re helping a recruiting company in Canada, I’m in the UK for a very different reason than Telekommunisten, doing independent software development for a private company. So we’re still self-funding our company. So we
haven’t yet got to a stage where our company can actually sustain itself from
our own peer production, which is our goal. But how we plan to realize
that goal, is through peer production. To start we can sketch out a simple economic model, to understand how the economics work. Economics
work with the so called factors of production: you have land, labour and
capital. Land is natural resources, that which occurs naturally, that which
nobody produces, that just sort exists. Land, electromagnetic frequencies,
everything which naturally exists. Labour is work, something that people
do. Capital is what happens when you apply labour to land, and you create
products. Some of these products have to be consumed, and some of those
products are to be used in further production, and that’s capital. So capital
is the result of labour applied to land that creates output that is used for
further production, and that’s tools, machines and so forth. This system
produces commodities which are consumed in the market. In this system
the dominant input in the production owns the final product, and all of
the actual value of the products is captured at that stage. So whoever sells
the product in the marketplace captures the full value of that product, the
full marginal value, or use value. All of the inputs to that process can never
make any more than their own cost of reproduction, their own subsistence cost. So if as a worker you’re selling your labour to somebody else
who owns the product, you’re never going to capture any more than your
subsistence cost.
Could you make that sort of concrete?

Well, the reason that people need design is because there’s some product
that in the end requires design as an input. For instance, a simple case is
obviously a magazine, in which design is a major input. The value is always
going to be captured by the people selling the magazine. All of the inputs
to that magazine, including design, journalism, layout, administration, are
never going to capture more than their reproduction costs. So in order for
any group of workers to really capture the value of their labour, they have to
own the final product. Which means that they can’t just simply be isolated
in one field, like design. It means that the entire productive cycle has to be
owned collectively by the workers. The designers, together with the journalists, together with the administrators, have to own the magazine, otherwise
they can’t capture their full value. As a group of designers this is very difficult, because as a group of designers you’re only selling an input, you’re not
at the end owning a product. The only way to do this is by forming alliances
with other people, and not based on wages, not based on them giving you
an arbitrary amount of money for that input, which will never be higher
than reproduction cost, but based on owning together the final product. So
you contribute design, somebody else contributes journalism, somebody else
contributes administration and together you all own this magazine. Then
it is this magazine that is sold on the market that is your wage, the value
of the magazine on the market. That is the only way that you can capture
the marginal value of your labour. You have to sell the product, not the input, not labour. Marx talks about labour being itself a commodity, and that
means that you can never capture its marginal contribution to production,
you can only capture its reproduction cost. Which means what it would
cost to sustain a designer. A designer needs to eat, a designer needs a place
to live, to have a certain lifestyle to fit in the design community and that’s
all you get by selling your labour. You won’t get any more because there is
no reason for the owner of the product to give you any more. The only way
you can get more is if you own the product itself, collectively with the other
labour inputs. And I know that’s a bad answer, nobody wants to hear that
answer.
Haha!

This estimate is, at the start, an impossibility. Because the whole point
of a creative project is that you’re doing something that hasn’t been done
before. And we have all struggled with this before. There are two things you
don’t know at the beginning of a contract. The first is how long it will
take and the second is what the criteria for being finished will be. You don’t
know either of those two things, and, since you don’t, determining the value
upfront of that is a complete guess. Which means that, when you agree to a
fixed-price term, you are agreeing to take on yourself the risk of the delivery
of the project. So it’s a transfer of risks. Of course the people that are buying
your labour as commodity want to put that risk back on you. They don’t
want to take the risk so they make you do that, because they can’t answer
the question of how much does it cost and how long it will take. They want
a guarantee of a fixed price and they want you to take all the risk. Which is
very unfair because it’s their product in the end; the end product is owned
by them and not by you. It’s a very exploitative relationship to force you
to take the risk for capitalizing their product. It’s a bad relationship from
the beginning. If you’re good at estimating and you know your work and
your limits and the kind of work you can do, you can make that work, and
make a living by being good at these estimates; but still first of all you’re
taking all the risk unfairly, and second you can’t make anything more than a
living. While if we’re going to build any kind of movement for social change
with these new forms of organization, we have to accumulate. Because
political power is an extension of economic power. So if we actually think
that our peer production communities are going to have political power and
ultimately change society, that can only happen to the degree that we can
accumulate. Which means capturing more than the reproduction costs of
our labour input, it means actually capturing the full value of our labour’s
products. The Benjamin Tucker quote I mentioned before is a good way to
keep it in mind. The natural wage of labour is its product. The natural wage
of labour isn’t 40 an hour, it isn’t some arbitrary number. The natural wage
of labour is its product.
In our case the product is making phone calls. And we don’t offer our labour
in the form of software development, we are putting together a collective
that can do everything, develop the software and bring it to the market. It is
actually the consumer making telephone calls that will pay for it. As I said,
we are not actually making a sustainable living from it right now.
We are only building this. We are still making most of our sustenance by
selling our labour.
Yeah.

That’s where we are starting from. But because we are going for a
model where the end product is sold directly to the consumer, there is
no mediation. There are no capitalist owners that are buying our labour and
owning the product and then selling the product for its value on the market.
We are selling the product directly to the consumers of the product, so there
is nothing in-between. And all of the workers that contribute to the making
of this product, whether they are programmers or into administration or
designers, together own this product and own this company. If you’re not
selling the product, then what you’re selling is behavioural control. If you’re
not paying for the magazine directly, it is paid for with the money coming
from lobbyists or from advertisers that want to control the behaviour of the
people perceiving that media, by making them buy some things or vote in a
certain way or have a certain image of a certain state department or the role
of the state. In the economic model where the actual magazine isn’t being
sold, where the media is free, in the way television is free, the base of that
model is what Dallas Smythe calls ‘audience power’. Smythe is one of the
main writers about the political economy of communications, and this is
sort of referred to in his ‘audience commodity’ thing, which is very degraded
and unfundamental discourse, but it’s related. ‘Audience power’, ultimately,
is just behavioural control. There is money to be made by changing the
behaviours of others. And this is the fundamental source of media funding,
sometimes it is commercials to sell an actual product by ads and sometimes
it is more subtle, like legitimizing a political system or getting people to
think favourably about a party or a state department or a government.
All the artists and the designers of the poster and the people that come to
the event, they have all kinds of motivations, use value. But the exchange
values, where the money comes from, the people writing the checks, what
they are buying is behavioural control, is to be represented in this context.
Through their commercial or political or legitimation purposes. The state
has legitimation needs, the state needs to be something that is thought of as
positive by people. And it does this by funding things that give a legitimacy,
like art, culture, social services. What it is buying, is this legitimation. It is
behavioural control. When an advertiser sponsors an art show or an event
or a television program what they are buying is the chance to make people
buy their product. So it is not that every single person, every single artist
in the show was thinking about how to manipulate the audience. Not at all,
they are just making art ... But where the money comes from, what they
are actually selling on the market, is behavioural control. It is the so called
‘audience power’.
How does that change the work itself, do you think?

It changes the way you work, a lot. There are so many restrictions
and limitations when you work on this model, on capital finance, because
the medium is constantly subverted and subjugated by the mediation, the
mediation is the message, to make it a catchphrase. If you know that your
art show is being funded by a certain agency, you’re going to avoid talking
critically about that agency, because obviously that is going to deny you
funding further on. It’s clear that the sources of funding affect the actual
message that is delivered at the end. It’s not possible to have SONY Records
sponsor an art show that then tells you how SONY is evil. It is very unlikely
that it is going to be funded again, maybe you can trick them once, but it’s
not going to be sustainable. We were joking before about how my use of
anarchist and socialist terminology actually gets the most flak from other
people in my own field. That’s because they are trying to portray what we
do in Free Software development and peer production as being unpolitical.
With my saying that no, it’s actually quite political, explaining why, they
feel like I’m blowing their cover. Like I’m almost outing them as being
leftist radicals and they don’t want this image because they actually think
they can fool this system. Which I think is delusional, I don’t think you
can fool this system. But that’s a very clear example how it does actually
change the context and change the message. Because you are always self-conscious of how you’re going to pay your rent and how you’re going to pay
your bills. It’s impossible to separate yourself from this context and if the
funding is coming from these directions you’re always going to self-censor
and it’s going to affect what you talk about in your choices that you make.
What to present, what not to present, where to place the emphasis, where
not to place the emphasis, it will always be modified by the context you are
producing in. And if what you’re being paid for is essentially to make people
like SONY or make people like the state then it’s going to change the way
you present what you are doing.
Yochai Benkler used the term ‘commons-based peer production’ and of
course took great pains to avoid talking about communism and to limit
this only to information production. He’s very clear, for him this is not for
real material production. Because he’s a liberal lawyer, working for a major
university, in the States ... so this is how he presents his work.
But what this means, commons-based production, is that the instruments of production are actually collectively owned and controlled by the
direct producers, which means that nobody can actually earn money simply by owning the instruments of production. You can only earn money
by employing the instruments of production in actually making something.
So, commons-based peer production. You have common things like instruments of production, land and capital, they are commonly controlled and
commonly owned, and individual labour of peers is applied to that shared
commons and the results of that labour are then owned by the actual producers. None of that product is owned by the people who are simply owning
instruments of production. That is what is meant by commons-based peer
production. But that’s exactly what the anarchist and the socialist call communism. There is no actual difference. Communism in a text book example
is the state less, property-less society. And that’s what it means, commonsbased peer production is a neologism, a modern way of saying communism
because for political reasons, post-war rhetoric, these words are verboten
and you can’t say them. So people invent new words, but they’re saying
exactly the same things. The point is that producers require land and capital to produce. If certain private interests control all of the access of direct
producers to land and capital, then those private interests can extract the
surplus value. Another great quote from Benjamin Tucker is whenever one
person earns without sweating ... ehm sorry, whenever one person earns without
sweating, another person sweats without earning and that’s fundamentally true.
If anybody is earning revenue simply by owning instruments of production,
that means that people actually producing are not capturing the value of
their labour. And that’s what commons-based peer production is. The idea
that we have a commons which is all of our property, nobody controls our
instruments of production, they’re all our property together. Each of us
have our labour and we apply that to the commons and we produce something and whatever we produce, that is ours. It’s our own, provided that we
are not taking anything away from anybody else, provided that we are not
taking any exclusive control of the commons.
In the case of Free Software development, the Free Software itself is a commons. But things that you might make with Free Software are not part of
the commons, they’re your own. But the problem with software itself is
that because software is immaterial and therefore has no reproduction costs,
it can be reproduced with no costs, it also has no exchange value. So in
order to convert it to exchange value you always have to apply other forms of
property: land, capital, hard fixed property ... And so, as commons-based
peer producers in the Yochai Benkler world, we have our little internal communism, but we can neither live in it nor feed ourselves with it. So in order
to actually sustain ourselves, to actually capture our material subsistence, we
then have to deal with people that own land and capital; fixed, scarce properties, and we have no leverage in that negotiation. The only thing we can
get back from the people that consume the output of our labour is our
reproduction costs and nothing more, while they continue to capture and
accumulate the extra value. Again, how that applies to design is another
thing, I don’t think you can isolate one kind of worker from the overall
thing. The point is you have to think of where is the value coming from,
what are you really selling? Because you’re not really selling design, design
is an input. What are you really ...
What do you mean by ‘design is an input’?

Design is an input. The average consumer doesn’t buy design. Nobody
goes to a store and says I’d like a design. They only want the design because
they want another product that has design as an input of that product. If
you’re making beer and you need a label, you find a designer to make the
label. But what you’re selling is beer, you’re not selling design. So you always
have to think about what you are really selling. What is the actual product
that people are exchanging for, what is the source of the exchange value.
And once you identify the source of the exchange value, you have to figure
out how to create a direct relationship with all the other producers that are
involved in the production cycle.
...

Seems incredibly difficult ...

If it was easy then capitalism would have been overthrown centuries ago

... You’re now owning a magazine already with a couple of people. The
next person asks you to design a beer label ...
You have to own the beer factory!

... And I think next you should own the paper company that makes ...

And then you need people and say I know how to make design, I need
some people who know how to make beer. So then we have a beer factory.
And then you need people who drink the beer! Who’s going to make the
people that drink the beer?
Haha.

But wait, there must be a little bit of difference, a modified option to
this. For example ...

In the scenario of commons-based peer production it’s not that the designers have to own the beer factory, it’s just that there can’t be any capitalist
in the middle that owns the land, it’s enough if the designers and the beer
makers both own the land together and the capital together ...
So if the beer company is also worker-owned and you come to an arrangement ... Isn’t that the idea of shares? Applying labour and therefore having shares in something ...

Yes, but it has to be equal. Shares in a capitalist system are unequal.
That’s the idea of copy-far-left. It’s the idea of a public license that allows
free use for non-alienated forms of production and denies free use for alienated forms of production. In the case of software, for instance, which is
not the greatest application of copy-far-left, but is a good example to understand, the software would be usable by a workers’ cooperative for free
but a private corporation employing wage labour and private capital couldn’t
use it for free. They would have to either not use it at all or negotiate a
different set of terms under which they could use it. So the question is
how do we remove coercive property relationships. If you really have a situation of commons-based peer production, or communism, where there is
no state, no property, the instruments of production are collectively owned,
people just work together in a very kind of free way, then it could certainly
work. But that’s not the world we are living in, so we have to be defensive
of our commons and how we produce in order for it to grow. We have
to think about where the exchange value is and think about where the use
value crosses into exchange value and make sure that the point is within our
boundary. If we can do that, that’s enough. If we have a worker-owned
design collective that works with a worker-owned beer company, that’s as
good as together owning a beer company. But only if they also live on land
and apartments that are also worker-owned, because otherwise the landlord will simply capture value; you have to look for the point of leakage.
Even with a workers’ design company and a workers’ beer company living
in Brussels renting from capitalists, the people that own the apartment
and the land will simply capture all the surplus value. The surplus value
will always leak at the point of scarcity, so the system has to be complete,
what Marcel Mauss calls a ‘total system’. It has to be a total system, if it
is not, if the entire cycle of production doesn’t go through commons-based
peer production hands, then it’s going to leak at the first point of scarcity.
Then whoever privately controls the one scarce resource through which all
this cycle of production goes, will capture all the surplus value.
Again, back to our very basic model. The price of anything is its reproduction cost, so the price of something that is immaterial is zero. So, since
the beginning of mechanical reproduction, property-based interest groups
have tried to create artificial barriers to reproduction. When you have artificial
barriers to reproduction the immaterial assets start to behave like material
assets; this is where copyright and intellectual property come from. It’s
the desire of property groups to make immaterial assets behave price-wise
the same as material assets; the only way to do that is to create barriers to
reproduction.
Typography obviously comes from this culture, like a lot of other media
culture. There are rules about how you can reproduce it, and it creates
the opportunity for the owners of these things to capture exchange value.
Because the reproduction costs are no longer zero, because of artificial costs
of reproduction. But in certain things the capitalists are not homogeneous,
there’s not just one group of capitalists. There is many different capitalists.
Even though some make their living from typography, many more capitalists make their living by using typography, so with typography as an input.
From the point of view of those capitalists, the ones trying to restrict the
reproduction of typography are a problem. So if they can hire their own
staff and develop free typography with other companies, they’re not selling
typography, that’s just an input for them. Like for standardized nuts and
bolts, one time this was true too, bolt-makers would make their nuts and
bolt not fit, in the sense that if you wanted to use a nut from one company
and a bolt from another you couldn’t do so. They tried to create a barrier
from this, but since the nuts and bolts industry is not the biggest in capital,
because capital itself needs nuts and bolts, the other companies got together
and said wait a minute, let’s just have standardized nuts and bolts, we don’t
want to make our money from nuts and bolts, we want to make our money
off-stream, from the product we make from nuts and bolts. Typography
falls into the same system. I imagine most of the people that are creating
free typography work for companies and they have their salary paid by companies that use typography, not companies that sell typography. Companies
that actually use typography in other production, whether it’s publishing or
whatever else they’re making, so the reproduction costs of the typographers
are paid for not by controlling the typography itself, but by employing it in
production and using it in another field. The people that are still trying
to hold on to typography as a product, as an end product whose value they
capture through intellectual property, are being pushed out.
In other areas this is not the case. If you look at the amount of money
that publishing companies spend on QuarkXpress, that’s not really a big
deal. From their point of view, they can hire some programmers and they
can make their own QuarkXpress and work with five other publishing companies, but the amount of money that they spend on QuarkXpress overall,
isn’t that high ...
Haha.

So the same economy of scale doesn’t apply. This is why commercial
software is still hanging on in these niche markets where there isn’t a broad
enough market. It’s not a broad enough input so that freedom is supported
by the users of it. Typography is a very general input. It’s like a nut or
a bolt, while QuarkXpress is pretty specific. Franziska was saying that in
her publishing company all they really need is two copies, or maybe one
even, of the software, and the whole company can work with it. They
just go to the computer with it when they need to do the layout, overall
it’s not a huge cost. They don’t need it every time they publish a book.
Whereas if they had to pay for the fonts they used, and every time they
wanted to use a different font they had to pay for it again, that would
be a problem, so they’d rather use a free font, and if that means hiring
somebody to drop the pixels down for a new font once and then having it
free forever, it can all make sense. That’s why typography is different from
software. And so the Scribus project has gone really far, but the reason
it’s obscure is because, apart from the ideological case, they don’t have a
business case they can make for the publishers. Because publishers
want a piece of software that works, and if it costs $400 once, who cares.
It doesn’t really affect their business model. You have to make the case for
the publishers that if you form an association of all the publishers and you
together develop some new Free Software to do publishing, that would be
better and cheaper and faster. Then maybe eventually this case would be
made and something like this would exist, but it’s not like an operating
system or a web browser, that is really used everywhere all the time, and
would be really inconvenient to pay for every time. If companies had to pay
every single time they put a web browser on their computer, that would be
very inconvenient for them. Even Microsoft doesn’t dare to charge money
for Internet Explorer, cos they know people would just say Fuck off. They’re
not going to buy it. In more obscure areas, like publishing, 3D animation,
film and video, it doesn’t make so much of a difference. In those business
models, for instance 3D animation, one of the biggest companies is Pixar.
They make the movies! They don’t make the software, they go all the way
through the process and they make the movie! So they completely own
everything. For that reason it makes sense for them, since they capture the
full value of their product in the end, because they make the movies, that
their software enables them to make. And this would be a good model
for peer production as well, except obviously they’re a capitalist organization
and they exploit wage labour. But basically if Scribus really wanted to have a
financial base, the people that develop Scribus would have to own a magazine
that is enabled by Scribus. And if they can own the magazine that Scribus
enables then they can capture enough of that value to fund the development
of Scribus, and it would actually develop very quickly and be very good,
because that’s actually a total system. So right from the software to the
design, to the journalism, to the editing, to the sale, to the capture of the
value of the end consumer. But because it doesn’t do that, they’re giving
Free Software away ... To whom? Where is the value captured? Where is the
use value transferred into exchange value? It’s this point that you have to get
all the way to, and if you don’t make it all the way there, even if you stop a
mile short, in that mile all of the surplus value will be sucked out.

This conversation took place in Montreal at the last day of
the Libre Graphics Meeting 2011. In the panel How to
keep and make productive libre graphics projects?, Asheesh
had responded rather sharply to a remark from the audience that only a very small number of women were
present at LGM: Bringing the problem back to gender is
avoiding the general problem that F/LOSS has with social
inclusion. Another good reason to talk to him was the
intriguing ‘Interactive training missions’ that he had been
developing as part of the OpenHatch.org project. I wanted
to know more about the tutorials he develops; why he decided to work on ‘story manuals’ that explain how to report a bug or how to work with version control. Asheesh
Laroia is someone who realizes that most of the work
that makes projects successful is hidden underneath the
surface. He volunteered his technical skills for the UN
in Uganda, the EFF, and Students for Free Culture, and
is a developer on the Debian team. Today, he lives in
Somerville, MA. He speaks about his ideas to audiences
at international F/LOSS conferences.
The interactive training missions are really linked to the background of
the OpenHatch project itself. I started working on it because to my mind,
one of the biggest reasons that people do not participate in Free Software
projects, is that they either don’t know how or don’t feel included. There is
a lot you have to know to be a meaningful contributor to Free Software and
I think that one of the major obstacles to getting that knowledge, and I am
being a bit sloppy with the use of the term maybe, is how to understand a
conversation on a bug-tracker for example. This is not something you run
into in college, learning computer science or any other discipline. In fact,
it is an almost anti-academic type of knowledge. Bug tracker conversations
are ‘just people talking’, a combination of a comment thread on a blog and
actual planning documents. There are also tools like version control, which
close to no one learns about in college. There is something like the culture
of participating in mailing lists and chatting on IRC ... what people will
expect to hear and what people are expecting from you.
For people like me that have been doing all these things for years, it feels
very natural and it is very easy to forget all the advantages I have in this
regard. But a lot of the ways people get to the point where I am now
involves having friends that help out, like Hey, I asked what I thought was a
reasonable question on this mailing list and I did not get any answer or what
they said wasn’t very helpful. At this stage, if you are lucky, you have a friend
that helps you stay in the community. If you don’t, you fall away and think
I’m not going to deal with this, I don’t understand. So, the training missions
are designed to give you the cultural experience and the tool familiarity in an
automated way. You can stay in the community even when you don’t have a
friend, because the robot will explain to you what is going on.

So how do you ‘harvest’ this cultural information? And how do you bring it into
your tool?

There is some creative process in what I call ‘writing the plot’; this is very
linear. Each training mission is usually between three and fifteen minutes
long so it is OK to have them be linear. In writing the plot, you just imagine
what it would take for a new contributor to understand not only what to do, but
also what a ‘normal community member’ would know to do. The different
training missions get this right to different extents.

How does this type of knowledge form, you think? Did you need to become a kind
of anthropologist of Free Software? How do you know you teach the right thing?
I spend a lot of time both working with and thinking about new contributors to Free Software. Last September I organized a workshop to teach
computer science students how to get involved in Open Source. And I have
also been teaching interpersonally, in small groups, for ten or eleven years.
So I use the workshops to test the missions and then I simply ask what
works. But it is tough to evaluate the training missions through workshops
because the workshops are intended to be more interpersonal. I definitely
had positive feedback, but we need more, especially from people that have
been two or three years involved in the Free Software community, because
they understand what it feels like to be part of a community but they may
still feel somewhat unsure about whether they have everything figured out, and they still remember what was confusing to learn.

I wasn’t actually asking about how successful the missions are in teaching the
culture of Free Software ... I wanted to know how the missions learn from this
culture?
So far, the plots are really written by me, in collaboration with others. We
had one more recent contribution on Git written by someone called Mark
Freeman who is involved in the OpenHatch project. It did not have so
much community discussion but it was also pretty good from the start. So
I basically try to dump what is in my head?

I am asking you about this, thinking about a session we once organized at
Samedies, a women-and-Free-Software group from Brussels. We had invited
someone to come talk to us about using IRC on the command-line and she was
discussing etiquette. She said: On IRC you should never ask permission before
asking a question. This was the kind of cultural knowledge she was teaching us
and I was a bit puzzled ... you could also say that this lack of social interfacing
on IRC is a problem. So why replicate that?
In Debian we have a big effort to check the quality of packages and maintaining that quality, even if the developer goes away. It is called the ‘Debian
QA project’ and there’s an IRC channel linked to that called #debian-qa.
Some of the people on that channel like to say hello to each other and
pay attention when other people are speaking, and others said stop with all
the noise. So finally, the people that liked saying hello moved to another
channel: #debian-sayhi.

Meaning the community has made explicit how it wants to be spoken to?

The point I am trying to make here, is that I am agreeing to part of what
you are saying, that these norms are actually flexible. But what I am further
saying, is that these norms are actually being bent.

I would like to talk about the new mission on bug reporting you said you were
working on, and how that is going. I find bug reports interesting because if
they’re good, they mix observation and narration, which asks a lot from the
imagination of both the writer and the reader of the report; they need to think
themselves into each other's place: What did I expect would happen? What
should have happened? What could have gone wrong? Would you say your
interactive training missions are a continuation of this collective imaginary work?

A big part of that sort of imagination is understanding the kinds of things
that could be reasonable. So this is where cultural knowledge comes in. If
you program in C or even if you just read about C, you understand that
there is something called ‘pointers’ and something called ‘segfaults’ and if
your program ends in that way, that is not a good thing and you should
report a bug. This requires an imagination on the side of the person filing
the bug. The training missions give people practice in seeing these sorts of
things and understanding how they could work. To build a mental model, even
if it is fuzzy, that has enough of the right components so they can enter in
discussion and imagine what happened.
Of course when there are real issues such as groping at conferences, or
making people feel unwelcome because they are shown slides of half-naked
people that look like them ... that is actually a gender issue and that needs
to be addressed. But the example I gave was: Where are the Indians, where
are the Asians in our community? This is still a confusing question, but not
awkward.

Why is it not awkward?

(laughs) As I am an Indian person ... you might not be able to tell from the
transcription?
It is an easy thing to do, to make generalizations of categories of people
based on visible characteristics. Even worse is to make generalizations about
all individual people in that class. It is really easy for people in the Free
Software community to subconsciously think there are no women in the
room ‘because women don’t like to program’, while we know that is really
not true. I like to bring up the Indian people as an example because there
are obviously a bunch of programmers in India ... the impression that they
can’t program, can’t be the reason they are excluded.

But in a way that is even more awkward?

Well, maybe I don’t feel it is that awkward because I see how to fix it, and I
even see how to fix both problems at the same time.

In Free Software we are not hungry for people in the same way that corporate
hiring departments are. We limp along and sometimes one or two or three
people join our project per year as if by magic and we don’t know how and
we don’t try to understand how. Sometimes external entities such as Google
Summer of Code cause many, many more to show up at the doorstep of our
projects, but because they arrive in such numbers, the projects don't gain any skills for how to
grow. When I co-ran this workshop at the computer science department at
the University of Pennsylvania on how to get involved in Open Source, we
were flooded with applicants. They were basically all feeling enthusiastic
about Open Source but confused about how to get involved. 35% of the
attendees were women, and if you look at the photos you’ll see that it wasn’t
just women we were diverse on, there were lots of types of people. That’s
a kind of diversity-neutral outreach we need. It is a self-empowerment
outreach: ‘you will be cooler after this, we teach you how to do stuff ’ and
not ‘we need you to do what we want you to do’, which is the hiring-kind
of outreach.

And why do you think Free Software doesn’t usually reach out in this way? Why
does the F/LOSS community have such a hard time becoming more diverse?

The F/LOSS community has problems getting more people and being more
diverse. To me, those are the same problems. If we would hand out flyers
to people with a clear message saying for example: here is this nice vector
drawings program called Inkscape. Try it out and if you want to make it even
better, come to this session and we’ll show you how. If you send out this
invitation to lots of people, you’ll reach more of them and you’ll reach more
diverse people. But the way we do things right now, is that we leave notes
on bug trackers saying: help wanted. The people that read bug trackers, also
know how to read mailing lists. To get to that point, they most likely had
help from their friends. Their friends probably looked like them, and there
you have a second or third degree diversity reinforcement problem. But
leaving gender diversity and race diversity aside, it is such a small number of
people!

So, to break that cycle you say there is a need to externalize knowledge ... like
you are doing with the OpenHatch project and with your project ‘Debian for
Shy People’? To not only explain how things technically work, but also how they
function socially?

I don’t know about externalizing ... I think I just want to grow our community. But when I feel more radical, I’d say we should just not write ‘How
to contribute’ pages anymore. Put a giant banner there instead saying: This
is such a fun project, come hang out with us on IRC ... every Sunday at 3PM.
Five or ten people might show up, and you will be able to have an individual
conversation. Quickly you’ll cross a boundary ... where you are no longer
externalizing knowledge, but simply treating them as part of your group.
The Fedora Design Bounties are a big shining example for me. Maírín Duffy
has been writing blog posts about three times a year: We want you to join
our community and here is something specific we want you to do. If you get it
right, the prize is that you are part of our community. The person that you get
this way will stick around because he or she came to join the community.
And not because you sent a chocolate cake?

Not for the chocolate cake, and also not for the $5,000 that you get over
the course of a Google Summer of Code project. So, I question whether it
is worth spending any time on a wiki-page explaining ‘How to contribute’
when instead you could attract people one by one, with a 100% success-rate.

Writing a ‘How to contribute’ page does force teams to reflect on what it takes to
become part of their community?
Of course that is true. But compared to standing at a job-fair talking to
people about their resume, ‘How to contribute’ pages are like anonymous,
impersonal walls of text that are not meant to create communication necessarily. If we keep focusing on communicating at this scale, we miss out on
the opportunity to make the situation better for individual people that are
likely to help us.

I feel that the Free Software community is quite busy with efficiency. When you
emphasize the importance of individual dialogue, it sounds like you propose a
different angle, even when this in the end has the desired effect of attracting more
loyal and reliable contributors.

It is amazing how valuable patience is.

You talked about Paul, the guy that stuck around on the IRC channel saying hi
to people and then only later started contributing patches after having seen two
or three people going through the process. You said: If we had implied that this
person would only be welcome when he was useful ... we would have lost
someone that would be useful in the future.

The obsession with usefulness is a kind of elitism. The Debian project
leader once made this sort of half-joke where he said: Debian developers
expect new Debian contributors to appear as fully formed, completely capable
Debian developers. That is the same kind of elitism that speaks from You
can’t be here until you are useful. By the way, the fact that this guy was some
kind of cheerleader was awesome. The number of patches we got because
he was standing there being friendly, was meaningful to other contributors,
I am sure of it. The truth is ... he was always useful, even before he started
submitting patches. Borrowing the word ‘useful’ from the most extreme
code-only definition, in the end he was even useful by that definition. He
had always been useful.

So it is an obsession with a certain kind of usefulness?
Yes.

It is nice to hear you bring up the value of patience. OSP uses the image of a
frog as their logo, a reference to the frog from the fairy tale ‘The frog and the
princess’. Engaging with Free Software is a bit like kissing a frog; you never know
whether it will turn into a prince before you have dared to love it! To OSP
it is important not to expect that things will go the way you are used to ... A
suspension of disbelief?

Or hopefulness! I had a couple of magic moments ... one of the biggest
magic moments for me was when I as a high school student e-mailed the
Linux kernel list and then I got a response! My file system was broken,
and fsck-tools were crashing. So I was at the end of what I could do and
I thought: let’s ask these amazing people. I ended up in a discussion with
a maintainer who told me to submit this bug-report, and use these dump
tools ... I did all these things and compiled the latest version from version
control because we just submitted a patch to it. By the end of the process
I had a working file system again. From that moment on I thought: these
magic moments will definitely happen again.
If you want magic moments, then streamlining the communication with your
community might not be your best approach?

What do you mean by that?

I was happy to find a panel on the program of LGM that addressed how this
community could grow. But then I felt a bit frustrated by the way people were
talking about it. I think the user and developer communities around Libre
Graphics are relatively small, and all people actually ask for, is dialogue. There
seems to be lots of concern about how to connect, and what tools to use for that.
The discussion easily drifts into self-deprecating statements such as ‘our website is
not up-to-date’ or ‘we should have a better logo’ or ‘if only our documentation
would be better’. But all of this seems more about putting off or even avoiding
the conversation.
Yes, in a way it is. I think that ‘conversations’ are the best, biggest thing
that F/LOSS has to offer its users, in comparison with proprietary software.
But a lot of the behavioral habits we have within F/LOSS and also as people
living in North America, are derived from what we see corporations doing.
We accept this as our personal strategies because we do not know any alternatives. The more I say about this, the more I sound like a hippie but I
think I’ll have to take the risk (laughs).
If you go to the Flash website, it tells you the important things you need to
know about Flash, and then you click download. Maybe there is a link to a
complex survey that tries to gather data en masse of untold millions of users.
I think that any randomly chosen website of a Libre Graphics project will
look similar. But instead it could say when you click download or run the
software ... we’re a bunch of people ... why don’t you come talk to us on IRC?
There are a lot of people that are not in the conversation because nobody ever
invited them. This is why I think about diversity in terms of outreach, not
in terms of criticizing existing figures. If in some alternate reality we would
want to build a F/LOSS community that exists out of 90% women and
10% men, I bet we could do it. You just start with finding a college student
at a school that has a good Computer Science program ... she develops a
program with a bunch of her friends ... she puts up flyers in other colleges
... You could do this because there are relatively few programmers in
the world busy with developing F/LOSS that you can almost handpick the
diversity content of your community. Between one and a thousand ... you
could do that. There are six billion people on this planet and the
number of people not doing F/LOSS is enormous. Don't wring your hands
about ‘where are the women’. Just ask them to join and that will be that!

Tying the story to data

In the summer of 2010, Constant commissioned artist and
researcher Evan Roth to develop a work of his choice, and
to make the development process available in some way.
He decided to use a part of his fee as prize-money for
The GML-Recorder Challenge, inviting makers to propose an Open Source device ‘that can unobtrusively record
graffiti motion data during a graffiti writer’s normal practice in the city’. In three interviews that took place in
Brussels and Paris within a period of one and a half years,
we spoke about the collaborative powers of the GML standard, about contact points between hacker and graffiti
cultures and the granularity of gesture.
Based on conversations between Evan Roth (ER), Femke
Snelting (FS), Peter Westenberg (PW), Michele Walther
(MW), Stéphanie Villayphiou (SV), John Haltiwanger (JH)
and momo3010.
Brussels, July 2010
ER

So what should we talk about?

FS

Can you explain what GML stands for?

ER

GML stands for Graffiti Markup Language. 1 It is a very simple file format designed for amateur programmers. It is a way to store graffiti
motion data. I started working with graffiti writers, combining graffiti
and technology back in New York, in 2003. In graduate school, my thesis
1

Graffiti Markup Language (.gml) is a universal, XML based, open file format designed to
store graffiti motion data (x and y coordinates and time). The format is designed to maximize
readability and ease of implementation, even for hobbyist programmers, artists and graffiti
writers. http://www.graffitimarkuplanguage.com

was on graffiti analysis, and writing software that could capture their
gestures, to archive motion data from graffiti writers. Back then I was
saving the data in an x-y-time array, I was calling them .graph files and I
sensed there was something interesting about the data, the visualization
of motion data but I had never opened up the project at that time.
About a year ago I released the second part of the project, of which the
source code was open but the dataset wasn’t. In conversation with a
friend of mine named Theo 2 , who also collaborated with me on the
L.A.S.E.R. Tag project 3 , he brought up the .graph file again and how
we could bring back the file format as a way to connect all these different applications. Graffiti Analysis 4 , L.A.S.E.R. Tag, EyeWriter 5 ... so I
worked with Theo Watson, Chris Sugrue 6 and Jamie Wilkinson 7 and
other people to develop Graffiti Markup Language. It is a simple set of
guidelines, basically an .xml file format that saves x-y-time data but does
it in a way that is very specifically related to graffiti so there’s a drip tag
and there’s tags related to the size of the brush and to how many strokes
you have: is it one stroke or two strokes or three strokes.
The main idea is: How do you archive the motion of graffiti and not just
the way graffiti looks. There are a lot of people photographing graffiti,
making documentaries etc. but there hasn’t been a way to archive graffiti
in terms of code yet.
FS

What do you mean, ‘archive in terms of code’?

ER

There hasn’t been a programmatic way to archive graffiti. So this
is like taking a gesture and trying to boil it down to a set of coordinate
points that people can either upload or download. It is a sort of midpoint
between writers and hackers. Graffiti writers can download the software
and have how-to guides for how to do this, they can digitize their tags
2
Theo Watson http://www.theowatson.com

3
In its simplest form, L.A.S.E.R. Tag is a camera and laptop setup, tracking a green laser
point across the face of a building and generating graphics based on the laser’s position which
then get projected back onto the same building with a high power projector.
http://graffitiresearchlab.com/projects/laser-tag

4
Graffiti Analysis is a digital graffiti blackbook designed for documenting more than just ink.
http://graffitianalysis.com

5
The EyeWriter is a low-cost eye-tracking system originally designed for paralyzed graffiti artist
TEMPT. The EyeWriter system uses inexpensive cameras and Open Source computer vision
software to track the wearer’s eye movements. http://www.eyewriter.org

6
Chris Sugrue http://csugrue.com

7
Jamie Wilkinson http://www.jamiedubs.com

and upload it to an open database. The 000000book-site 8 hosts all this
data and some people are writing software for this.

FS

So there are three parts: the GML-standard, software to record and
play and then there is the data itself – all of it is ‘open’ in some way. Could
you go through each of them and talk about how they produce uploads and
downloads?

ER

Right. It starts with Graffiti Analysis. It is software written in C++
using OpenFrameworks, an Open Source platform designed by artists for
visual applications. Right now you can download the recorder app and
from that you can generate your own .gml files. And from there you can
upload these files into the playback app. In the beginning that was the
only Open Source side of the project. Programmers could also make new
applications based on the software, which also happened.
Last night we met Stéphane Buellet 9 who is developing a calligraphy
analysis project and he used Graffiti Analysis as a starting point. I find it
exciting when that happens but more often people take the file-format as
a starting point, and use it as a jumping-off point for making their own
work.
Second was the database. We had this file-format that we loosely defined.
I worked with Jamie to develop the 000000book site. It is pretty nuts-and-bolts but you can click ‘upload’ and click on your own .gml files and
it will play back in the browser. People have developed their own playback
mechanisms, which are some of the first Open Source collaborations that
happened around .gml files. There is a user account and you can upload
files; people have made image renderers, there are people that have made
Flash players, SVG players. Golan Levin has developed an application
that converts a .gml file into an AutoCAD format. The 000000book site
is basically where graffiti writers connect to developers.
In the middle between Graffiti Analysis and database is the Graffiti Markup
Language, that I think will have its own place on the web. But sometimes
8
http://000000book.com. Pronounced: ‘Black Book’: ‘A black book is a graffiti artist’s
sketchbook. Often used to sketch out and plan potential graffiti, and to collect tags from
other writers. It is a writer’s most valuable property, containing all or a majority of the
person’s sketches and pieces. A writer’s sketchbook is carefully guarded from the police and
other authorities, as it can be used as material evidence in a graffiti vandalism case and link a
writer to previous illicit works.’
Wikipedia. Glossary of graffiti — Wikipedia, the free encyclopedia, 2014. [Online; accessed 5.8.2014]

9
Stéphane Buellet, Camera Linea http://www.chevalvert.fr/portfolio/numerique/camera-linea

I see it as one project. One of my interests is in archiving graffiti and all
of these things are ways of doing that. It is interesting how these three
things work together. In terms of an OS development model it has been
producing results I haven’t seen when I just released source code.
FS

How do you do that, develop a standard for graffiti?

ER

We started by looking at Graffiti Analysis and L.A.S.E.R. Tag, the
apps that were using graffiti motion data. From those two projects I had a
lot of experience of meeting graffiti writers as a userbase. When you meet
with them, they tell you right away what pieces of the software they think
are missing. So from talking with them we developed a lot of features
that now are in GML like brushes, drips, line-thickness. Some people
had single line tags and some people had multi-line tags so that issue
came up because GML tracks both drawing and non-drawing motion so
we knew that we needed the file format to talk about pen up and pen
down. I was interested in the connection points between lines also.
We tried to keep it very stripped down. From the beginning we knew
that people that would participate as developers or anonymous contributors were not going to be the same people that would develop a Linux
core. They are students, people just getting into programming or visual
programming. We wanted people to be able to double-click a .gml file
and then everything should verbally make sense so it is Begin stroke.
End stroke. Anyone with basic programming skills should be able to
figure out what’s going on.
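To make this concrete, here is a minimal, hypothetical .gml file of the kind described above: x-y-time points grouped into strokes, so that pen-down and pen-up are implicit in where a stroke begins and ends. The element names are assumptions based on this conversation and the footnote, not a copy of the official specification.

<!-- hypothetical minimal .gml file; element names are assumptions,
     only the x-y-time-per-stroke structure is taken from the interview -->
<gml spec="1.0">
  <tag>
    <drawing>
      <!-- first stroke: pen down, three points, pen up -->
      <stroke>
        <pt><x>0.00</x><y>0.00</y><time>0.00</time></pt>
        <pt><x>0.12</x><y>0.31</y><time>0.08</time></pt>
        <pt><x>0.25</x><y>0.58</y><time>0.17</time></pt>
      </stroke>
      <!-- second stroke: the non-drawing motion in between is simply not recorded -->
      <stroke>
        <pt><x>0.40</x><y>0.10</y><time>0.92</time></pt>
        <pt><x>0.55</x><y>0.47</y><time>1.05</time></pt>
      </stroke>
    </drawing>
  </tag>
</gml>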
FS

Did you have any moment where you had to decide: this does not belong
to graffiti or: this might be more for calligraphy tracking?

ER

The only thing that has to be in there is the format in x-y time
scenario with some information on drawing and not drawing, everything
else is bonus. So if you load an .xml file structured like that, compliant
apps will load it in. On top of that, there are features that some apps
will want and others not. Keywords are, for example, a functionality that
we are still developing applications for. It is there but we are looking for
how to use it.
FS

Did you ever think about this standard as a way to define a discipline?

ER

(laughs) I think in the beginning it was a very functional conversation.
We were having apps running this data and I don’t think we were thinking
of defining graffiti when we were writing the format. But looking back,
it is interesting to think about it.
Graffiti has a lot of privacy issues related to it too, right? So we did
discuss what it would mean to start recording geo-located data.
There are different interests in graffiti. There is an interest in visuals and
in deconstructing characters. Another group is interested in it, because
it is a sport and more of a performance art. For this type of interest, it
is more important to know exactly where and when it happened because
it is different on a rooftop in New York to a studio in the basement of
someones house. But if someone realizes this data resulted from an illegal
action, and wanted to tie it back to someone, than it starts to be like
a surveillance camera. What happens when someone is caught with a
laptop with all this data?
FS

Your desire to archive, is it also about producing new work?

ER

I see graffiti writers as hackers. They use the city in the same way
as hackers are using computer systems. They are finding ways of using
a system to make it do things that it wasn’t intended to do. I am not
sure graffiti writers see it this way, but I am in this position where I have
friends that are hackers, playing around with digital structures online.
Other friends are into graffiti writing and to me those two camps are
doing the most interesting things right now, but these are two communities that hardly overlap. One of the interests I have is making these
two groups of people hang out more. I was physically the person bridging these two groups; I was the nerd person meeting the graffiti writers
talking to them about software and having this database.
Now it is not about my personal collection anymore, it is making a handshake between two communities; making them run off with each other
and having fun as opposed to me having to be there all the time to make
introductions.
FS

Is GML about the distribution of signature? I mean: The gestures of
a specific person can now be reproduced by a larger community. How does
that work?

ER

This is an interesting conversation we should have with the graffiti
writers. A tag might be something they have been writing for more than
25 years and that will be very personal to them and the way they write
this is because they’ve written it a million times. So on the one hand it
is super-personal, but on the other hand a lot of graffiti writers have no
problem sharing this data. To them it is just another tag. They feel like,
I have written this tag a billion times and so when you want to keep one of
them, it is no big deal.
I don’t think the conversation has gotten as involved as it could have.
You set something in motion and cross your fingers hoping that everyone
plays nice and things go well and so far that is what has been happening.
But you are dealing with people that are uploading something that is super
personal to them and I’d be curious to see what happens in the future.
The graffiti taxonomy project that I have been doing involves a lot of
photos of graffiti. It is a visual study based on characters; I am shooting
thousands of photos of graffiti and I don’t have an opportunity to meet
with all these writers to ask them if it is OK. So I get e-mails from writers
once in a while saying Hey, you used a photograph of one of my tags and
usually it is them feeling out where my intentions are and where I am
coming from.
It has taken a long time to gain the trust of the community I am working with. Usually when I am able to explain what I am doing and that
everything is released openly and meant to be completely free, so far at
least the people I have managed to talk to are OK with it and understand
it. Initially when people see something they’ve made being used by other
people, a lot of times it can be a point where a red flag is raised and I am
assuming there are more red flags going to go up.
FS

If you upload a .gml file, can you insert a licence?

ER

Not yet. Right now there is not even a ‘private mode’ on the
000000book site. If you upload, everything is public. There are a lot of
interesting issues with respect to the licence that I have been reluctant to
deal with yet. Once you start talking too much about it, you will scare
off people on either side of the fence. I think that will have to happen at
some point but for now I have decided to refer to it as an ‘open database’
and I hope that people will play nicely, like I said.
FS

But just imagine, what kind of licence would you need?

ER

It might make more sense to go for a media-related licence than for
a code licence. Creative Commons licences would lend themselves easily
for this. People could choose non-commercial or pure public domain.
Does that make sense?
FS

Well, yes but if you look at the objects that people share, we’re much
closer to code than to a video file?

ER

Functionally it is code. But would a graffiti writer know what GPL is?

PW

I am interested in the apprentice-system you were talking about earlier.
Like a young writer learning from someone else they admire. The GML
notation of x-y-time might help someone to learn as well. But would you
ever really copy someone else’s tag?
ER

One of the reasons I think graffiti writing has this history of apprenticeship is because you don’t really have a chance to learn otherwise. You
don’t turn on the TV and see someone else doing it. You only see how it
is being written if you see other people actually do it. That was one of the
original reasons I started doing graffiti research because, having met with
graffiti writers, I thought: it is a dance, it is as much about motion as
it is about how the final image is constructed. You can come to a much
better understanding about how it is made as opposed to just seeing a
photograph of it.

PW

If you want to learn from the person writing, you would need to see
more than just the trace of a pen?

ER

Someone’s tag might look completely different if they had six seconds
to make it, they make different decisions. In the first version of the
Graffiti Analysis project, I had one camera recorder tracking the pen and
another camera behind the hand and another so you could see the full
body. But there was something about tracking just the pen tip that I
liked. It is an easier point of entry for dealing with the motion data than
having three different video feeds.
FS

Maybe it is more about metadata? Not a question of device or application, but about space for a comment.

ER

Maybe in the keywords there will be something like: Rooftop.
Brooklyn. Arrested.
The most interesting part is often the stories that people tell afterward
anyway. So it is an interesting idea, how to tie the story to the data.
It is a design problem too. Historically graffiti has been documented
many times by outsiders. The movie Style Wars 10 is a good example of
10
Style Wars. Tony Silver, 1983. http://www.stylewars.com

this epic documentary that was made by outsiders that became insiders.
Also, the people that have been documenting most of the graffiti are not
necessarily graffiti writers.
Graffiti has a history with documentarians entering into their community and playing a role but sharing the stories is something writers do
internally, not as much to outsiders. How do you figure out a way to get
graffiti writers to document their stories into the .gml files themselves,
or is it going to take outsiders? How does the format facilitate that?

FS

Do you think the availability of a project like GML can have an impact
on the way graffiti is learned? If data becomes available in a community
that operates traditionally through apprenticeships and person-to-person
sharing, what does it do?

ER

I am interested in Open Source culture being influenced by graffiti,
and I am interested in Open Source culture influencing graffiti as well.
On a big picture I would love it if the graffiti community got interested
in these ideas and had more of a skill-sharing-knowledge-base.
KATSU 11 , someone I worked with in New York, has acquired a lot of
knowledge about how to make tools for graffiti and he initially wasn’t
so much into sharing them, because graffiti writers tend to save that
knowledge for themselves so that their tags are always bigger and better (laughs). Talking to him I think I convinced him to write tutorials on
how to make some of these tools. On the street art side there is Mark
Jenkins 12 , he has this technique of making 3D objects that exist within
the city and we had a lot of conversations too.
There are many ways tech circles and Open Source circles can come together with people that are making things outside, with their hands. I
think graffiti can learn from that. In the end people would be making
more things outside which would be a good thing.
FS

In a way typography has a similar culture of apprenticeship. Some
people enjoy spreading knowledge, and others resist in the name of quality
control.

ER

Interesting. I think the work I am doing is such a tangent! In general,
for something that is decidedly against the rules, the culture of writing
graffiti often has a rigid structure. To people in that community what
11
KATSU http://www.flickr.com/search/?q=graffiti+katsu

12
Mark Jenkins tapesculptures http://tapesculpture.org

I do is a blip on their radar. I am honored when I get to meet graffiti
writers and they are interested in what I am doing but I don’t think it
will change anything in what is in some ways a very strict system.
And I don’t want that either. I like the fact that they found a way to make
spraypaint and markers change the way each city in the world looks. They
have the tools they need. Digital projectors will not change that. Graffiti
writers still like to see their names projected at big scales in new ways but
it is not something they really need (laughs).

FS

And the other way around? How does graffiti have an influence on
Open Source communities?

ER

For the people on the technology side, it is an easy jump. To think
about hacking software systems and then about making things outside.
I see that with the Free Art and Technology Group 13 that I help run.
When they start thinking about projects in the city, it takes little to come
up with great ideas. I also see that in the class I teach, Urban Hacking.
There is already a natural overlap.
FS

What connects the two?

ER

It is really about the idea of hacking. The first assignment in the
class is not to make anything, but simply to identify systems in the city.
What are elements that repeat. Trying to find which ones you can slip
into. It has been happening in graffiti forever. Graffiti in New York in
the eighties was to me a hack, a way to have giant paintings circulating in
the city ... There is a lot of room to explore there.
FS

Your experience with the Blender community 14 did not sound like an
easy bridge?

ER

Recently I released a piece of software that takes a .gml file and
translates it into a .stl file, which is a common 3D format. So you can
basically take a graffiti gesture and import it into software like Blender.
I used Blender because I wanted to highlight this tool, because I want
these communities to talk to each other.
So I was taking a tag that was created in the streets of Vienna and pulling
it into Blender and in the end I was exporting it to something that could
13
The Free Art and Technology (F.A.T.) Lab is an organization dedicated to enriching the
public domain through the research and development of creative technologies and media.
Release early, often and with rap music. http://fffff.at

14
Blender is a free Open Source 3D content creation suite. http://www.blender.org/

be 3D printed, to become something physical. The video that I posted online intentionally showed screenshots from Blender and it ended
up on one of the bigger community sites. I only saw it when my cousin,
who is a big Blender user, e-mailed me the thread. There are about a hundred dedicated Blender users discussing the legitimacy of graffiti in art
and how their tools are used 15 ; pretty interesting but also pretty conservative.
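As a rough illustration of the .gml-to-.stl idea mentioned above (and emphatically not Evan Roth's actual tool), the sketch below turns one stroke of x-y points into a thin extruded ribbon and writes it out as ASCII STL, which Blender can import. The stroke data and the ribbon height are made up.

# Toy sketch: turn one GML-style stroke into a thin vertical ribbon
# and emit it as ASCII STL so it can be opened in a 3D tool like Blender.
def stroke_to_stl(points, height=0.05, name="gml_stroke"):
    lines = ["solid %s" % name]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # two triangles per segment, extruded along z
        quad = [(x0, y0, 0.0), (x1, y1, 0.0), (x1, y1, height), (x0, y0, height)]
        for tri in (quad[0:3], [quad[0], quad[2], quad[3]]):
            lines.append("  facet normal 0 0 0")  # most tools recompute normals
            lines.append("    outer loop")
            for vx, vy, vz in tri:
                lines.append("      vertex %f %f %f" % (vx, vy, vz))
            lines.append("    endloop")
            lines.append("  endfacet")
    lines.append("endsolid %s" % name)
    return "\n".join(lines)

if __name__ == "__main__":
    stroke = [(0.0, 0.0), (0.12, 0.31), (0.25, 0.58)]  # made-up points
    print(stroke_to_stl(stroke))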
FS

Why do you think the Blender community responded in that way?

ER

It doesn’t surprise me that much. Graffiti is hard to accept, especially
when we are talking about tags. So the only reason we might be slightly
surprised by hearing people in the Open Source community react that
way, is because intellectual property doesn’t always translate to physical
property. Writing your name on someone’s door is something people universally don’t like. I understand. For me the connection makes sense but
just because you make Open Source doesn’t mean you’ll be interested in
graffiti or street art or vice versa. I think if I went to a Blender conference
and gave a talk where I explained sort of where I see these things overlap,
I could make a better case than the three minute video they reacted to.
FS

What about Gesture Markup Language instead of Graffiti Markup
Language?

ER

Essentially GML records x-y-time data. If you talk about what it
functionally does, it is probably more related to gesture than it is to graffiti. There is nothing at the core specifically related to graffiti. I am
interested in branding it in relation to graffiti and to get people to talk
about Open Source where it is traditionally not talked about. To me
that is interesting. It is a way to get people excited about open data, and
popularizing ideas about Open Source.
FS

Would you be OK if it would get more popular in non-graffiti circles?

ER

I am super excited when I see it used in bizarre places. I’ll keep using
it for graffiti, but someone e-mailed me that they were upset that it only
tracks one point. There hasn’t been a need to track multiple tags at once.
They wanted to use it to track juggling, but how to track multiple balls
in the air? I keep calling it Graffiti Markup Language because I think it
is a good story.
15
http://www.blendernation.com/2010/07/09/blender-graffiti-analysis

PW

What’s the licence on GML?

ER

We haven’t really entered into that. Why would you need a licence
on a file format?

FS

It would prevent anyone from owning the standard.

ER

That sounds good. Actually it would be interesting for the project, if
someone would try to licence it. Legal things matter, but for the things I
do, I am most of all interested in getting the idea across.
FS

I am interested in the way GML stems from a specific practice. How
it is different and similar to large, legal, commercial, global standardization practices. Related, how can GML connect to other standard practices?
Could it be RDF compliant?

PW

Gesture recognition to help out the police?

FS

Or maps of places that are in need of some graffiti? How to link GML
to other types of data?

ER

It is hard for me to imagine something. But one thing is interesting
for example, how GML is used in the EyeWriter project. It has not
so much to do with gesture, but more with how you would draft in a
computer. TEMPT is plotting points, so the time data might not be so
interesting but because it is in the same format, the community might
pick it up and do something with it. All the data TEMPT writes with
his eyes is uploaded to the 000000book site automatically. That
allowed another artist called Benjamin Gaulon 16 who I now know, but
didn’t know at the time, to use it with his Print Ball project. He took the
tag data from a paralyzed graffiti writer in Los Angeles and painted it on
a wall in Dublin. Eye-movement translated into a paint-ball gun ... that
is the kind of collaboration that I hope GML can be the middle-point
for. If that happens, things can start to extrapolate on either end.
FS

You talked about posting a wish-list and being surprised that your
wishes were fulfilled within weeks. Why do you think that a project like
EyeWriter, even if it interests a lot of people, has a hard time gathering
collaborators, while something much more general like GML seems to be
more compelling for people to contribute to?
16
Benjamin Gaulon, Print Ball
http://www.eyewriter.org/paintball-shooting-robot-writes-tempt1-tag

ER

I’ll answer that in a second, but you reminded me of something
else: because EyeWriter was GML based, a lot of the collaborations
that happened with people outside of the project were GML related,
not EyeWriter related. So we did have artists like Ben and Golan take
data drawn by TEMPT and do completely different things which made
TEMPT a collaborator with them in a way. The software allowed him to
share his work in a format that allowed other people to work with him.
The wish-list came out of the fact that I was working on a graffiti related
project that had a lot of use but not a lot of innovation. Not so many
people were using it in ways I wasn’t expecting, which is something you
always hope of course. By saying: Here’s the things I really would like to
happen, things started to happen. I have been surprised how that drove
momentum. Something similar I hope will happen to the work we will
do together in the next months too!
FS

What are you planning to do?

ER

We are planning to make a dedicated community page for the graffiti
markup language which is one of the three points of the triangle. The
second step would be a new addition to the wish-list, a challenge with a
prize associated to it which seems funny. The project I’d like to concentrate on is making the data collection easier so that graffiti writers can be
more active in the upload sense. Taking the NASA development model:
Can you get into orbit on this budget?
FS

How is that different from the way you record graffiti motion at the
moment?

ER

If I go out with a graffiti writer, I’m stuck standing with a laptop and
a camera facing the wall and then the graffiti writer needs to have a really
bright light attached to the writing device which is a bit counter-intuitive
when you are trying to do something without being seen (laughs). It
could be infrared by the way, that could be the first step but then security
cameras would still pick it up. The design I am focusing momentum on is
a system that’s easier. A system that can work without me there, without
having to have a laptop there. The whole idea is that it would be a natural
way to get good data, to document graffiti without a red-head holding a
laptop following you around the whole time!

Paris, December 2010
FS

How is it to be the sole jury member?

ER

I tried to get another jury-member on there actually. Do you know
Limor Fried? She runs Adafruit Industries. 17 I really like her work. She
works with her partner Phil Torrone who runs Make Blog. 18 I invited
her to be the second jury-member because she makes Open Source hardware kits; this is her full-time thing. She is very smart and has a lot of
background in making DIY kits that people actually build. She is also
very straightforward and very busy, so she wrote back and said: this is
too much work. No.
So ... yeah, I am the only jury member. Hmmm.
SV

Is the contest already over?

ER

It is not over. It was easy to launch; I tried to make it coincide with the
launch of the website and there were a couple of things going on at the
same time. The launch helped spread the word about this file format, and
people making projects, and vice versa.
FS

Did you have any proposals that came close to meeting the challenge?
Did you consider giving out the prize?

ER

No.
There are a couple of people that got really close. The interesting thing
that is happening with the challenge is something that is also happening
to other high barrier projects: You end up speaking to the people you already work with the most. I have a hard time figuring out to some extent
what is really happening, but the things I hear about people making progress
come from people that are close to me. It reminds me of the EyeWriter project
where the people that are willing to dip their toes into this are already in the friend
group, or one level removed. They are pretty high level programmers.
I didn’t really think that actual money would be such an incentive but
more that it would make the challenge feel serious, more in the sense
of an organization that has some kind of club behind it. If you solved
one of the design problems by the Mozilla community you could receive
17
Limor Fried, Adafruit Industries http://www.adafruit.com

18
Phillip Torrone, Makezine http://makezine.com/pub/au/Phillip_Torrone

kudos from the community, but if you solved one of my projects, you
don’t really get kudos from my community, do you?
Having the money associated makes it this big thing. At Ars Electronica
and so on, it got people talking about it and so it is out there. That
part worked. Beyond that it has been a bit hard to keep the momentum.
Friends and colleagues send me ideas and ask me to look at things, but
people I don’t know are hard to follow; I don’t think they are publishing
their progress. There is a hackerspace in Porto that has been working on
it, so I see on their blog and Twitter that they are having meetings about
this and are working on it.
FS

Don’t you think having only one prize produces a kind of exclusivity? It
seems logical not to publish your notes?

ER

Maybe. Kyle 19 has been thinking up ways to do it and I know he
wanted to use an optical mouse, and then a friend, Michael 20 , has been
using sensors, and he ran into a software problem but had the hardware
problem more or less solved. And then Kyle, a software expert, has been
running into hardware problems and so I kind of introduced them to each
other over e-mail so I don’t know if they are working on it together.
FS

Would you consider splitting the prize?

ER

I don’t care, but I don’t know if the candidates would consider splitting the prize! I know Michael has already spent a lot of money because
he has been buying Arduinos and other hardware. He wants to make
a cheap version to solve the problem and then make another one that
costs 150 on top of the price limitation to make it easier to use. He is
spending a bunch of money so even if he wins, it is going to get him only
out of the hole and he will not have much left.
Actually, Golan 21 had an idea for an iPhone app that he wants to make
but I am not sure it solves it.
FS

Why don’t you think his app will solve it?

ER

He is really interested in making something where you do not need
to meet with the graffiti writer. His idea was that if you could take a
photo of it on the wall, and then with your finger you guide it for how it
19
Kyle McDonald http://kylemcdonald.net

20
Michael Auger http://lm4k.com

21
Golan Levin http://www.flong.com

was written. It has an algorithm for image processing and that combined
with your best guess of how it was written would be backed out in motion
data. But it is faked data.
FS

That is really interesting!

ER

Yes it is and I would love it if he would make it but I am not going to
let him win with it (laughs). I understand why he wants to do it; especially
if you are not inside the graffiti community, your only experience is what
you see on the wall and you don’t know who these people are and it is
going to be almost impossible to ever get data for those tags. If you don’t
have access to that community you are never going to get the tag of the
person that you really want. I like the idea that he is thinking about
getting some data from the wall as opposed to getting it from the hand.
FS

Learning by copying. Nowhere near solving the challenge, but interesting. At OSP 22 we were discussing the way designers are invited into
Open Source Software by way of contest. Troy James Sobotka 23 got angry
and wrote: We want to be part of this community, we don’t want to compete
for it.

ER

With the EyeWriter project, we were thinking a lot about that; how
to spur development. I think I would not have done a competition with
the EyeWriter. Making it fun, that is what makes it happen. If it would
be a really serious amount of money, with people scraping at each other,
fighting each other ...
For me, the fact that there is prize money makes something that is already
ridiculous in itself even more funny. To have prize money for such a small
community of people that are interested in coding and in graffiti. I’m not
seriously thinking that we can spur development with this kind of money.
To use the EyeWriter as an example, we’ve had money infusions from
awards mostly and we had to think about how we could use that money
to get from point A to point B. That’s also a project where we had very
22
OSP (Open Source Publishing) is a graphic design collective that uses only Free, Libre and
Open Source software. http://ospublish.constantvzw.org

23
The very notion of Libre / Free software holds cooperation and community with such high regard
you would think that we would be visionary leaders regarding the means and methods we use to
collaborate. We are not. We seem to suffer from a collision of unity with diversity. How can we
more greatly create a world of legitimate discussion regarding art, design, aesthetic, music, and other
such diverse fields when we are so stuck on how much more consistent a damn panel looks with tripe
22 pixel icons of a given flavour?
http://www.librescope.com/975/spec-work-and-contests-part-two

definable design goals of what we wanted to reach, especially between the
first version and where we are now with the second version.
FS

How did that work?

ER

We are not talking about a ton of money here, 10 to 20,000, and
we tried to get as far as we could. We got almost no work done between
the meetings in LA but if we flew in, it was OK to take a week out of
our schedules and really hammer at it. We were trying to think how we
could do the same thing for people that we wanted to work with and who
we had met in conferences. So that is how we thought of spending that
money.
The other way we use money in the EyeWriter project is that we buy
people kits. We know a few people that are interested in hacking on it
but they don’t have the hardware. Not that they are so expensive, but
Zach wants to buy twenty or thirty unpackaged kits and he has interns
working with him in New York helping to build them. So we have these
systems ready so as soon as someone wants to get hacking on it, we can
mail them a working system that they can just plug in and they don’t
have to waste their time ordering all these parts from all these websites
all over China. And when they are done, they just send it back.
FS

You talked about some things in the challenge that worked and some
that didn’t.

ER

I think the forum is the obvious thing that did not work. I have
friends working on OpenFrameworks, it is headed primarily by Zach and
Theo. When you see that forum, it is very involved. It is a deep system,
with many different libraries and lots of code flying around. GML is really
not large enough.
I think what makes sense for this project is when I post news about the
project, I see it ripple in Google Alerts. For people working on it, having
a place where these things show up is already a lot. The biggest success
is the project space, to see all the projects happening.
FS

What happened on the site since we talked?

ER

A project I like is kml2gml 24 for example. It is done by a friend from
Tokyo. He was gathering GPS data riding his bike around various cities,
and building up a font based on his path. I like projects like this, where
24
Yamaguchi Takahiro http://www.graffitimarkuplanguage.com/kml2GML

someone takes a work that is already done and just writes an application
to convert the data into another format. To see him riding his bike played
back in GML was really nice. It is super low barrier to entry, he already
did all the hard work. I like that there is now a system for piping very
different kinds of data through GML.
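A sketch of what a kml2gml-style conversion amounts to, not Yamaguchi Takahiro's actual code: pull the GPS coordinates out of a KML track, normalise them, and re-emit them as one GML stroke of x-y-time points. The GML element names and the fixed ten-second duration are assumptions.

# Sketch of the kml2gml idea: read a GPS track from a KML file and
# re-emit it as normalized x-y-time points forming a single stroke.
import xml.etree.ElementTree as ET

def kml_to_points(path):
    ns = {"kml": "http://www.opengis.net/kml/2.2"}
    coords = ET.parse(path).find(".//kml:coordinates", ns).text.split()
    # KML coordinates are "lon,lat[,alt]" triples
    return [(float(c.split(",")[0]), float(c.split(",")[1])) for c in coords]

def points_to_gml(points, duration=10.0):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    def norm(v, lo, hi):
        return (v - lo) / (hi - lo) if hi > lo else 0.0
    pts = []
    for i, (x, y) in enumerate(points):
        t = duration * i / max(len(points) - 1, 1)
        pts.append("<pt><x>%.4f</x><y>%.4f</y><time>%.2f</time></pt>"
                   % (norm(x, min(xs), max(xs)), norm(y, min(ys), max(ys)), t))
    return "<gml><tag><drawing><stroke>%s</stroke></drawing></tag></gml>" % "".join(pts)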
FS

But it could also work the other way around?

ER

Yeah. This is maybe a tangent but depending on how someone solves
the GML challenge ... I was discussing this with Mike (the person that is
developing the sensor based version). He was thinking that if you would
turn on his system, and leave it on for a whole night of graffiti writing,
you would have the gestural data plus the GPS data. You could make
a .gml file that is tracking you down the street, and zoom in when you
start making the tag. Also you would get much more information on
3D movement, like tilt and when the pen is picking up and going down.
Right now all I am getting is a 2D view through video data. I am really
keeping my fingers crossed. But he ran into trouble though.
FS

Like what?

ER

I have my doubts about using these kinds of sensors, because ‘drift’ is
a problem. When you use these sensors for too long, the measurement tends to drift
a little bit. I think he is working within a 0.25 inch margin of error right
now, which is right on the edge. If you are recording someone doing a
big piece, this is not going to ruin my day too much, but if you record a
little tag then it is a problem.
The other problem is that you need to orient the system before you start
tagging. It needs to know what is up and down, you have to define your
plane of access. I don’t really understand this 100% but he thinks he can
still fit it all within the ten second calibration requirement, he’s thinking
that each time you come to a wall, you tap once, you tap twice and tap a
third time to define what plane you are writing on and that calibrates the
3D space. Once you have that calibration done, you can start writing. It
is not as easy as attaching a motion sensor. The problem is hard.
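Geometrically, the three-tap calibration described here comes down to letting the taps define the wall plane and then projecting later sensor readings onto it. A small sketch with made-up readings follows; this is not Michael Auger's implementation.

# Sketch of the three-tap calibration: three tap positions define the wall
# plane; later 3D readings are projected onto it and expressed as 2D (x, y).
import numpy as np

def calibrate(tap1, tap2, tap3):
    p1, p2, p3 = map(np.asarray, (tap1, tap2, tap3))
    u = p2 - p1                  # first in-plane axis
    n = np.cross(u, p3 - p1)     # wall normal
    v = np.cross(n, u)           # second in-plane axis, perpendicular to u
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return p1, u, v

def to_wall_coords(point, origin, u, v):
    d = np.asarray(point, dtype=float) - origin
    return float(d @ u), float(d @ v)   # drop the off-wall component

origin, u, v = calibrate((0, 0, 0), (1, 0, 0.02), (0, 1, -0.01))  # made-up taps
print(to_wall_coords((0.5, 0.5, 0.03), origin, u, v))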
FS

So you need to touch the wall before writing on it, feeling out the
playing field before starting! It is like working on a tablet; to move from
actual movement to instruction; navigation blends into the action of drawing
itself.

ER

I like that!

SV

The guy using the iPhone did not use it as a sensor at all?

ER

Theo was interested in using the iPhone to record motion data in
GML, but also to save the coordinates so you could tie it into Google
Earth or something but he had trouble with the sensitivity of the sensor.
Maybe it is better now but you needed to draw on a huge scale for one
letter. You could not record anything small.
FS

But it could be nice if you could record with a device that is less conspicuous.

ER

I know. I have just been experimenting with mounting cameras on
spray-cans. A tangent to GML, but related. It is not data, but video.
FS

What do you think is the difference between recording video, and
recording data? You mentioned that you wanted to move away from documenting the image to capturing movement. Video is somehow indirect
data?

ER

Video is annoying in that it is computationally expensive. In Brazil 25
I have been using the laptop but the data is not very precise.
Kyle thinks he might be able to back out GML data from videos. This
might solve the challenge, depending on how many cameras you need and
how expensive they are. But so far I have not heard back from him. He
said it needs three different cameras all looking at the wall. I mean: talk
about computationally expensive! He likes video-processing, he knows
some Open Source software that can look for similar things and knows
how to relate them. To me it seems more difficult than it needs to be
(laughs).
FS

It is both overcomplicated and beautiful, trying to reverse engineer
movement from the image.

ER

I am getting more into video myself. I get more enjoyment from capturing the data than from the projections, like what most people associate
with my work.
FS

Why is it so much more interesting to capture, rather than to project?

ER

In part because it stays new, I’ve been doing those projections for a
while now and I know what happens at these events. For a while it was
very new, we just did it with friends, to project on the Brooklyn bridge
25
Graffiti Analysis: Belo Horizonte, Brazil 2010 http://vimeo.com/16997642

for example. Now it has turned into these events where everyone knows
in advance, instead of just showing up at a certain time at a set corner.
It has lost a lot of its magic and power.
Michele and I have done so many of these projections and we sort of
know what to expect from it, what questions people will ask. When I
meet with graffiti writers, that almost always feels new to me. When we
went to Brazil, we intentionally tried to not project anything but to spend
as much time as possible with writers. Going out with graffiti writers to
me always feels right.

FS Is the documentation an excuse to be taken along, or is the act of
documenting itself interesting to you?

To me documentation is interesting. I don’t know where all of this
is going right now, I am just trying to get the footage; I put these pieces
together showing all this movement but I don’t really know what the final
project is. It is more about collecting data so I am interested in having
video, audio and GML that can be synced up, and the sound from these
microphones is something to do something with later. This is research
for me. I like the idea of having all this data related to a 10 second gesture.
I am thinking that in the future we can do interesting things with it. I
am even thinking about how the audio could be used as a signal to tell
you what is drawing and what is not drawing. It is a really analog way of
doing it, but in that way you don’t need a button where you are getting
true and false statements for what is drawing and what is not drawing;
you can just tell by the sound:
tfffpt ... tfffpt.
ER
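
The idea of reading drawing/not-drawing straight from the sound could be as simple as thresholding the loudness of the microphone signal in short windows. A minimal sketch with invented numbers (window size, threshold, sample values); it is not part of the actual capture software.

    # Sketch: derive a spraying/not-spraying mask from microphone loudness.
    # `samples`, the window size and the threshold are assumed values,
    # for illustration only.

    def drawing_mask(samples, window=512, threshold=0.05):
        """Return one True/False value per window: True while spraying."""
        mask = []
        for start in range(0, len(samples), window):
            chunk = samples[start:start + window]
            rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
            mask.append(rms > threshold)
        return mask

    # Silence, then a burst of 'tfffpt', then silence again.
    quiet, loud = [0.001] * 1024, [0.3] * 1024
    print(drawing_mask(quiet + loud + quiet))   # [False, False, True, True, False, False]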

FS

You can hear the space, and also the surface.

I got started doing this because I love graffiti and this is a way to
get closer to it again. Like getting back out to the streets and having
very personal relationships to the graffiti writers and talking to them,
and having them give feedback. I think that is how the whole challenge
started. It didn’t start because I was projecting, but because I was out on
the street and testing the capture, having graffiti writers nearby when it
is happening. It feels like things are progressing that way.
ER

Are you thinking of other ways of capturing? You talk about capturing
movement, but do you also archive other elements? Do you take notes,
pictures? What happens to the conversations you are having?
FS


I have been missing out on that piece. It is a small amount of time
we have, and I am already trying to get so much. I am setting up a
camera that shoots straight video from a tripod, I am capturing from the
laptop and I am also screencasting the application, my head is spinning.
One reason I screwed up this footage in the beginning is because with all
these things going on I forget to turn on some things. Maybe someone
will solve this challenge.
ER

FS

Are you actually an embedded anthropologist?

In the back of my head I am thinking this will become a longer documentary. I like to experiment with documentation, whether that is in
code or with video. I do think that there is this interesting connection
between documentation and graffiti and how these two things overlap.
I am always thinking about documentation. The graffiti writer that was
in Vienna 26 showed me a video that was amazing. It was him and a
friend going out on a sunny day at 15:30 in the afternoon with two head
mounted cameras, bombing an entire train and you hear the birds singing
and you only experience it by these two videos that are linked. There are
interesting constraints: your hands are already full, you don’t want people’s faces on camera so the head-mounted cameras were smart. Unless
you walk in front of a mirror (laughs).
ER

FS

Is it related to the dream of ‘self documenting code’?

I like that. Even doing the challenge is in a way a reflection on this,
how I am fighting to get GML back to the streets somehow, it has a
natural tendency to get closer to the browser, to the screen, and my job
is to get it back to the street. It is so sexy and fun and flashy and that is
important too. My job is to keep the graffiti influence on it as large as the
other part.
ER

FS
ER

Is any of this reflected in the standard itself?

I haven’t looked at the standard for a while now.

I was thinking again about live coding and notation. Simon Yuill 27
describes notation as a shared space that allows collaboration but also defines
the end of a collaboration.
FS

26 momo3010 http://momo1030.com
27 Simon Yuill, All problems of notation will be solved by the masses. Mute Magazine, 2008


Maybe using an XML-like structure was a bad idea? Maybe if I had
started with a less code-based set of rules? If the files were raw video,
it would encourage people to go outside more often? By picking XML
I am defining where the thing heads in a way. I think I am OK in the
role of fighting that tendency. It is not just a problem in GML but with a
lot of work I have been doing with graffiti and technology and even way
back with Graffiti Analysis, before GRL (Graffiti Research Lab), the idea
was always to keep the research very close to the people doing graffiti. I
was intentionally working with people bombing a lot and not with graffiti
celebrities. I wanted to work with whoever’s tag was on my mailbox, whose
tag I see a million times when I walk down the street. Since then
a lot has happened, like with more popular projects such as L.A.S.E.R.
Tag, and it goes almost always further away from graffiti. Maybe that is
a function of technology. Technology, or the way it is now, will always
drift towards entertainment uses, commercial uses.
ER

Do you think a standard can be subversive? You chose XML because it
is accessible to amateur programmers. But it is also a very formal standard,
and so the interface between graffiti writers and hackers is written in the
language of bureaucracy.
FS

ER (laughs) I thought that there was something funny with that. People
that know XML and the web, they get the joke that something so rigid
and standardized is connected to writing your name on the wall. But to
be honest, it was really just a pragmatic choice.
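
For readers who have never opened a .gml file, here is a rough picture of the kind of structure being joked about. This is a minimal sketch using Python's standard XML tools; the element names (gml/tag/drawing/stroke/pt with x, y, t children) follow my reading of the public spec at graffitimarkuplanguage.com/spec and should be treated as illustrative rather than authoritative.

    # Sketch: write a one-stroke tag as GML-style XML.
    # Element names follow my reading of the public GML spec; verify
    # against graffitimarkuplanguage.com/spec before relying on them.
    import xml.etree.ElementTree as ET

    points = [(0.10, 0.20, 0.0), (0.15, 0.25, 0.1), (0.20, 0.35, 0.2)]  # x, y, time

    gml = ET.Element('gml')
    tag = ET.SubElement(gml, 'tag')
    drawing = ET.SubElement(tag, 'drawing')
    stroke = ET.SubElement(drawing, 'stroke')
    for x, y, t in points:
        pt = ET.SubElement(stroke, 'pt')
        ET.SubElement(pt, 'x').text = str(x)
        ET.SubElement(pt, 'y').text = str(y)
        ET.SubElement(pt, 't').text = str(t)

    ET.ElementTree(gml).write('tag.gml', encoding='UTF-8', xml_declaration=True)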

It reminds me of an interview 28 with François Chastanet who wrote a
book 29 about tagging in Los Angeles. He explains that the Gothic lettering
is inspired by administrative papers!
SV

I am wondering whether you’re thinking about the standard itself as
a space for hacking?
FS

Graffiti is somehow coded in-itself. Do you mean it would be interesting
to think how GML could be coded in a way for graffiti writers, not for
coders?
There would be more space for that when more people start to program at
a younger age? When it is more common knowledge. If I would start to do
ER

28 Interview with François Chastanet http://www.youtube.com/watch?v=ayPcaGVKJHg
29 François Chastanet, Cholo writing: Latino gang graffiti in Los Angeles. Dokument, 2009


that now, I would quickly lose my small user-base. I love that idea though;
the way XML is programmed fits very much to the way you program for the
web. But what if it was playing more with language, starting from graffiti
which is very coded?
When I was in college, I was always thinking about how to visualize
motion in print. I was looking for ways people had developed languages
for different ways of writing.
ER

Maybe you could look at the Chinese methods for teaching writing,
because the order of the strokes is really important. If you make the stroke
from bottom to top, and not from top to bottom, it is wrong.
SV

A friend in Hong Kong, MC Yan, loves the Graffiti Analysis project
because it shows the order in which he is writing and he likes to play
with that. So he writes words in different order than people are used to
and so it changes the meaning. People can not only watch the final result,
but also the order which is an interesting part of the writing process. The
brush, the angle, direction: depicting motion!
In the beginning of the Graffiti Analysis Research project I was very
against projection, because I felt that was totally against the idea of graffiti. I was presenting all of these print ideas and the output would be
pasted back into the city because I was against making an impermanent
representation of the data. In the end Zach said, you are just fighting this
because you have a motion project and you want to project motion and
then I said alright, I’ll do a test. And the tests were so exciting that I felt
OK with it.
ER

In what way does GML bridge the gap between digital drawing and
hand writing? Could you see a sort of computer-aided graffiti? Could you
see computation enter graffiti?
FS

Yeah. When you are in a controlled environment, in a studio, it is
easy but the outdoors part always trips me up. That is why the design
constraints get interesting, playing in real time with what someone is
writing. I think graffiti writers would be into that too. How to develop
a style that is unique enough to stand out in an existing canon is already
hard enough. This could give someone an edge.
ER

I think the next challenge I’d like to run is about recreating the data
outside. I’ve been thinking about these helicopters with embedded wireless
ER


cameras, have you seen them? The obvious thing to me would be uploading
a .gml file to one of these helicopters that is dripping paint on a rooftop.
Scale is so important, so going bigger is always going to be better.
Gigantic rooftop tags could be a way to tie it back to the city, give it a
reason? I am thinking of ways to get an edge back to the project. The
GML-challenge is already a step into that direction; it is not about the
prettiest screensaver. To ask people to design something that is tying back
to what graffiti is, which is in a way a crime.
I think fixing the data capture is the right place to start, the next one could
be about making marks in the city. Like: the first person to recreate this
GML-tag on the roof of this building, that would be fun. The first person
that could put this ‘Hello World’ tag onto the Brooklyn bridge and get a
photo of it gets the prize. That would get us back to the question of how
we leave marks on the surface of the city.
When you capture data of an individual writer in a certain standard,
it ends up as typography?
FS

That’s another trend that happens when designers look at graffiti, and
I’ve fallen into this too sometimes, you want to be able to make fonts out of
it. People have done this actually; there’s a project in New York where they
met with pretty influential graffiti writers and asked them to write in boxes,
the whole alphabet, and I think there’s something interesting there.
The alphabet that you saw the robot write was drawn by TEMPT with the
EyeWriter and what he did was a little bit smarter than other attempts by
graffiti writers to make fonts. He intentionally picked a specific style, the
Cholo style, and the format is very tall, vertically oriented, angled. That
style is less about letter connections and pen-flow. What graffiti has developed into, and especially tags, is very much about how it is written and
the order of the letters. When TEMPT picked this style he made a smart
decision that a lot of people miss when you make a font, you miss all the
motions and the connections.
ER

What if a programmer could put this data in a font, and generate
alternating connections?
SV

ER That kind of stuff is interesting. It would help graffiti writers to design
tags maybe?
To get my feet wet, I designed a tag once, and it was so not-fun to write!
I was thinking about a tag that would look different and that would fit
into corners, I was interested in designing something that wasn’t curved;
that would fit the angles of the city, hard edges. So I had forgotten all
my research about drafting and writing. I think I stopped writing in part
because the tag I picked wasn’t fun to write. For a font to work like writing,
it is not just about possible connections between lines. You’d need another
level in the algorithm, the way the hand likes to move.
It would be a good algorithm to dream up. It was beautiful to see a
robot write TEMPT’s letters by the way.
FS

When TEMPT saw the robot writing for the first time, his reaction was
all about the order of how the letters were constructed. The order is I think
defined by the way he dropped the points in with the EyeWriter software.
When he was writing with his eyes, he ended up writing in the same way
as he would have written with his hands. When he saw the video with the
robot, it freaked him out because he was like: That’s how my hand moved
when I did that tag!
ER


The Graffiti Markup Field Recorder challenge

An easily reproducible DIY device that can unobtrusively record graffiti motion data during a graffiti writer’s normal practice in the city. 30
Project Description and Design Requirements:



The GML Field Recorder Challenge is a DIY hardware and software solution for unobtrusively recording graffiti motion data during a graffiti writer’s
normal practice in the city. The winning project will be an easy to follow
instruction set that can be reproduced by graffiti writers and amateur technologists. The goal is to create a device that will document a night of graffiti
bombing into an easily retrievable series of Graffiti Markup Language (.gml)
files while not interfering with the normal process of writing graffiti. The
solution should be easy to produce, lightweight, cheap, secure, and require
little to no setup and calibration. The winning design solution will meet
the requirements listed below:
Material costs for the field device must not exceed 300

300 even felt expensive to me. How can this be a tool that is really
accessible? If it goes over a certain price point, it is not the kind of thing
that people can afford to make. It is a very small community, a lot of the
people that are going to have enough interest to build this are not going
to have a background in engineering, and are probably not even a part of
the maker scene that we know. The audience here might not be people
that are hanging out on Instructables. I wanted to make sure that the
price point meant that people could comfortably take a gamble to make
something for the first time. But I also did not want to make it so small
that the design would be impossible.

ER

30 GML-recorder challenge as published on: http://www.graffitimarkuplanguage.com/challenges




Computers and equipment outside of the 300 can be used for non-field
activities (such as downloading and manipulating data captured in-field),
but at the time of capture a graffiti writer should have no more than 300
worth of equipment on him or herself.

I was trying to think of how the challenge could be gamed ... I did not
want to get into a situation where we were getting stressed out because some
smart hacker found a hole in the brief, and bought a next generation iPhone
that somehow just worked. I didn’t want to force people to buy expensive
equipment. This line was more about covering our own ass.
ER



The graffiti writer must be able to activate the recording function alone (i.e., without assistance from anyone else).
FS

Are you going to be out of work soon?

Thinking selfishly, I screw up on documentation a lot because I have
too many hats. When I’m going out doing this, I am carrying a laptop, a
calibration set up, I also have one video-camera on me that is just documenting, I have another one on a tripod, and I am usually screen capturing
the software as it processes the video-footage because it tells another story.
I screw up because I forget to hit stop or record. If the data-capture just
works, I can go have fun getting good video-footage.
ER

What if it had to be operated by more than one person? It is nice
how the documentation now turns the act of writing into a performance-for-one.
FS

If you record alone, the data becomes more interesting and mysterious,
right? I mean, no one else has seen it. Something captured very privately,
then gets potentially shared publicly and turned into things that are very
different. I also thought: you don’t want to be dependent on someone else.
It is a lot to ask, especially if you are doing something illegal.
ER




Any setup and/or calibration should be limited to 10
seconds or less.

This came out of me dealing with the current system. It feels wrong
that it takes ten to fifteen minutes to get it running. Graffiti is not meant
to be that way. This speaks to the problem of the documentation infringing on the writing process, which ideally wouldn’t happen. The longer
the set-up takes, the more it is going to influence the actual writing. It is
supposed to be a fly on the wall.
ER

FS
ER



Does it scale? Does a larger piece allow longer calibration time?

That’s true. But I think this challenge is really about recording tags.

All hardware should be able to be easily concealed within
a coat with large pockets.

A hack to get around that would have been to design a jacket with ten
gallon pockets!
I put it there again, to make the device not be intrusive. A big part of graffiti
writing is about gaining entry and you limit where you can go depending on
how much equipment you have. How bulky it is, what walls you can get up,
what holes you can get through.
ER



The winning solution should be discreet and not draw
any added attention to the act of graffiti writing.
ER It’s part of the same issue, but this one also came out from me going
out and trying to capture with a system where it requires you to attach
a flashlight to a graffiti implement. I didn’t want anyone solving the
problem and then, Step one is: ‘Attach a police siren to a spraypaint can’




The resulting solution should be able to record at least
10 unique GML tags of approximately 10 seconds each in
length in one session without the need for connecting
to or using additional equipment.

I wasn’t thinking this was going to be an issue in terms of memory storage, but maybe in terms of memory management. I did not want the
graffiti writer to behave as if he was on vacation with a camera that could take
only three photos. I wanted to make sure they were not making decisions
on what they were writing based on how much memory they had.
ER



All data recorded using the field recorder should be
saved in a secure and non-incriminating fashion.

(laughs) If I had to do that one again, I would have put that in the Bonus
category actually. That’s a difficult question to ask. What does secure
mean? It seems a bit unfair, because it doesn’t fit in to the way graffiti is
currently documented. There’s not a lot of graffiti writers that currently
are shooting encrypted photos and videos, right?
But whatever bizarre format comes out from the sensor will help. I don’t
think that the NYPD will have time or make the effort to parse it. They’d
just have a file with a bunch of numbers. Time stamped GPS coordinates
would be more dangerous.
ER

FS

What would count as proof?

In most cases it is hard to convict someone on the basis of a photo
of a tag that you would tie to another tag. For good reasons, because if it
is a crew name for example, all of a sudden you are pinning one tag on a
person that could have been written by twenty people. This came up in
a trial in DC when an artist named BORF got arrested. He had written
his name everywhere, completely crushed DC and his trial was a big deal.
This issue came up and they argued that BORF was a collective, not an
individual. Who knows if that’s true, there were a lot of people around
him, but how do you really know?
ER

FS

GML could help balance the load?

You mean it would not be just the image of a tag but more like signing
at the bank?
ER


I mean that if you copy and distribute your data, the chance is small
that you can link it to an individual.
FS



The winning design will have some protection in the event
that the device falls into the wrong hands.

This again should probably have been a bonus item. Wouldn’t it be
awesome if you could go home and log in and flip a one to a zero and the
evidence goes up in smoke?
One graffiti writer friend told me: If the police comes, just smash the camera
as hard as you can! It’s a silly idea, but it shows that they are thinking
about it.
ER

FS
ER



Edible SD cards?

That would be a good idea!

Data should be able to be captured from both spray cans
and markers.
ER
FS

Yes.

Are you prepared for tools that do not exist yet?

That was kind of what I was thinking there. Markers are about direct
contact, spraypaint is in free space. If it works in those two situations, you
should theoretically be able to tie it to anything, even outside of graffiti. If
it was too much about spraypaint, it would be harder for someone to strap
it to a skateboard.
ER




System should be able to record writing on various surfaces and materials.

It is something you can easily forget about. When you are developing
something in the studio and it works well against a white wall, and then
when you go out in the city you realize that brick is a really weird
surface. Or even writing on glass, or on metal or on other reflecting
surfaces that could screw up your reading. It is there as a reminder for
people that are not thinking about graffiti that much. The street and the
studio are so different.
ER



Data should be captured at 30 points per second minimum.

I was assuming that lots of people were going to use cameras, and
I wanted to make sure they were taking enough data points. With other
capturing methods it is probably not such a problem. Even at 30 points per
second you can start to see the facets if you zoom in, so anything less is not
ideal.
ER
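
To put the 30 points per second in perspective, the storage side really is negligible; the back-of-the-envelope calculation below assumes a generous 64 bytes per recorded point, an invented figure for illustration.

    # Sketch: storage for a night of tagging at 30 points per second.
    # 64 bytes per point is an assumed, generous figure (XML is verbose).
    points_per_second = 30
    seconds_per_tag = 10
    tags_per_night = 10
    bytes_per_point = 64

    points = points_per_second * seconds_per_tag * tags_per_night     # 3,000 points
    print(points, "points, about", points * bytes_per_point / 1024, "KiB")  # ~187.5 KiB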



The recording system should not interfere with the writer’s
movements in any way (including writing, running and climbing).

So this is where Muharrem is going to run into trouble. His solution
interferes. Not that much if you are just working in front of your body
space. But the way most writers write is that they are shuffling their feet
a lot, moving down the wall. Should it have said: Graffiti writer should
retain access to feet functionality? This point should be at the top almost.
ER

To me it feels strange, your emphasis on the tool blending into the
background. You could also see Muharrem’s solution as an enhancing device,
turning the writer into a tapdancer?
FS

I want to have on record: I love his solution! There’s a lot in his
design that is ‘making us more aware’ of what’s happening in the creation
of a tag. One thing that he is doing that is not in the specs, is that he is
ER


logging strokes, like up and down. When you watch him using it, you
can see a little light going from red to green when the finger goes on
and off the spraypaint can. When you watch graffiti, it is too small of a
movement to even notice but when you are seeing that, it adds another
level of understanding of how they are writing.


All motion data should be saved using the current GML standard. 31
FS



Obvious.

All aspects of the winning design should be able to be
reproduced by graffiti writers and amateur technologists.

It wouldn’t be exciting if only ten people can make this thing. This
tool should not be just for people that can make NASA qualified soldering
connections. Ideally it should not have any soldering. I always thought of
a soldering iron like a huge barrier point. I’m all for duct-taped electrical
connections.

ER

There’s nothing about weather resistance in the challenge. You’re not
thinking about rain, are you?
FS

A lot of paint stops working in rain too.
I think what you get from this brief though is that the whole impetus for
this project is about me trying to steer the ship that clearly wants to go
into another direction, back to my interest in what graffiti is rather than
anything that people might find aesthetically pleasing. It is not about
‘graffiti influenced visuals’.
ER

31 http://graffitimarkuplanguage.com/spec




All software must be released Open Source. All hardware must include
clear DIY instructions/tutorials. All media must be released under an
Open Content licence that promotes collaboration (such as a Free Art
License or Creative Commons ShareAlike License).

I didn’t want it to be too specific, but there had to be some effort into
making it open.
ER



The recording must be an unobtrusive process, allowing the graffiti writer to concentrate solely on the act
of writing (not on recording). The act of recording should
not interfere with the act of graffiti writing.

I’ve been through situations where the process gets so confusing that
you can’t keep your head straight and juggle all the variables. Your eyes
and ears are supposed to tell you about who’s coming around the corner.
Is there traffic coming or a train? There are so many other things you
need to pay attention to rather than: Is this button on?
The whole project is about getting good data. As soon as you force people
to think too much about the capture process, I think it influences when
and how they are writing.
ER

Bonus, but not required:


Inclusion of date, time and location saved in the .gml
file.

Yes. Security-wise that is questionable, but the nerd in me would just
love it. You could get really interesting data about a whole night of writing.
You could see a bigger story than just that of a single tag. How long did it
take to gain entry? How long were they hiding in the bushes? These things
get back to graffiti as a performance art rather than a form of visual art.
ER


Paris, November 2011
Last time we had contact we discussed how to invite Muharrem to
Brussels. 32 But now on the day of the deadline, it seems there are new
developments?
FS

ER I think in terms of the actual challenge, the main update is that since
we extended the deadline and made another call, I got an e-mail right on
the deadline today from Joshua Noble 33 with a very solid and pretty smart
proposal that seems to solve (maybe unfortunately for Muharrem) a bit
more of the design spec. It does it for cheaper and does it in a way that I
think is going to be easier to make also.
His design solution is using an optical mouse and he changed the sensors
so it has a stronger LED. He uses a modified lens on top of a plastic lens
that comes on top of a mouse, so that it can look at a surface that is a set
distance away. It has another sensor that looks at pitch, tilt and orientation,
but he is using that only to orient, the actual data gets recorded through the
mouse. It can get very high resolution, he is looking at up to a millimeter I
guess.
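
The optical-mouse approach amounts to dead reckoning: the sensor reports relative displacements, and the recorder sums them into absolute positions. A minimal sketch assuming a stream of (dx, dy, timestamp) samples and a counts-per-millimetre figure picked for illustration; none of this is Joshua Noble's actual code.

    # Sketch: turn relative optical-mouse deltas into an absolute path.
    # `samples` and `counts_per_mm` are assumed inputs, illustration only.

    def integrate(samples, counts_per_mm=40.0):
        """samples: iterable of (dx_counts, dy_counts, t_seconds)."""
        x = y = 0.0
        path = []
        for dx, dy, t in samples:
            x += dx / counts_per_mm           # millimetres since the start
            y += dy / counts_per_mm
            path.append((x, y, t))
        return path

    samples = [(40, 0, 0.00), (40, 20, 0.03), (0, 40, 0.06)]
    print(integrate(samples))  # [(1.0, 0.0, 0.0), (2.0, 0.5, 0.03), (2.0, 1.5, 0.06)]

The obvious catch with summing deltas is that small errors accumulate, which is exactly the drift problem raised earlier and presumably why the extra sensor is only used for orientation.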
FS

Muharrem’s solution seems less precise?

I think he gets away with more because his solution is only for spraypaint
and once you are writing on that scale, even if you are off a few centimeters,
it might not ruin the data. If you look at the data he is getting, it actually
looks very good. I don’t think he has any numbers on the actual resolution
he is getting but if you were using his system with a pen, I think it would
be a different case. I like a lot of his solution too, it is an interesting hack.
It is funny that two of the candidates for the prize are both mouse hacks.
One is hacking a mechanical mouse and the other an optical mouse.
ER

FS
JH

It goes from drawing on a screen, to drawing on a wall?
And back again!

ER Yes. When I first was working on graffiti related software, the whole
reason I was building Graffiti Analysis as a capture application was

32 By early October 2011 no winning design-solution had been entered, besides a proposal from
Muharrem Yildirim that came more than halfway. We decided to use the prize money to fly
Muharrem from Phoenix (US) to Brussels (BE) and document his project in a worksession as
part of the Verbindingen/Jonctions 13 meetingdays. http://www.vj13.constantvzw.org
33 Joshua Noble http://www.thefactoryfactory.com/gmlchallenge/


because I did not want to hand graffiti writers a mouse (laughter). I had
done all this research into graffiti and started to be embedded in the
community and I knew enough about the community that if you were
going to ask them to take part in something that was already weird, you
could not give them a mouse and expect any respect on the other end
of that conversation. They respect their tools, so the reason I was using camera-input was because I wanted to have a flexible system where
they could bring in anything and I could attach a device to it. Now I am
coming back to mice finally.
Now the deadline has passed, do you think the passage from wishlist to
contest worked out?
FS

I think it was a good experiment, I am not sure how clever it was. To
take a piece of culture that a lot of people don’t even look at, or look at
it and think it is trash, to invest all this time and research and software
expertise into it makes people think about the graffiti practice and what
it actually is. The cash prize does something similar. It attaches weight
to something that most people don’t even care about. Even having the
name of an organization like Constant attached to it is showing that I am
really serious about this. In that sense it is different than a wishlist.
I just read the Linus Torvalds 34 biography, and I liked his idea that ‘fun’
is part of innovation, right? In a programming sense, it is scratching a
personal itch. The attachment of a prize is more to underline the fun
aspect than anything else.
ER

I am still puzzled about GML and how it is on the one hand stimulating
collaboration and sharing, and then it comes back to the proud individual
that wants to show off. It is kind of funny actually that now two people are
winning the prize.
FS

ER

I understand what you mean.

Also in F/LOSS, under the flag of ‘Open’ and ‘Free’ there is a lot of
competition. Do you feel that kind of tension in your work?
FS

Even ‘Open’ and ‘Free’ are in competition!
In a project like White-Glove Tracking for example, the most popular
video was not made by me and did not have my name on it but personally I
ER

34 Torvalds, Linus; David Diamond (2001). Just For Fun: The Story of an Accidental Revolutionary. New York: HarperCollins.


still felt a part of it. I think when you are working in open systems, you
take pride when a project has wings. It is maybe even a selfish act. It is
the story of me receiving some art funding and realizing that I am not the
best toolmaker for the job. Whoever manages to win the prize gets all
the glory, but I’m still going to feel awesome about it.

I have been reading the interview that Kyle McDonald did with Anton
Marini 35 and at some point he talks about being OK with sharing code and
libraries, but when it is too much of a personal style, then it is hard to share.
FS

Yes, I thought that was an interesting point. I’ve been in similar conversations on listservs with artists in the OpenFrameworks, Processing
and visual programming communities. What are the open pieces? It
makes sense to share libraries, but if I make a print from a piece of code,
do I then have to share the exact source and app for how that exact print
was made? What does it mean when I am investing money in a print, and
it is a limited series but I’m sharing the code? The art world is still based
on scarcity and we’re interested in computers that are copy-machines.
I see both sides of the argument and I am still trying to see how I fit
into it. It gets trickier when you are asked to release a piece rather than
a tool. If you are an Open Source artist and you make a toolset, that is
easier to share because people use that to make their own things. But
then an artist gets asked: how come I can’t get the file of that print? I
think that is a really hard question.
ER

FS

But isn’t the tool often the piece, and vice versa?

I agree. And I haven’t solved that question yet. Lately I’ve been a lot
less excited about running workshops for example. A lot of the people
that want to take part in the workshops are actually the opposition. Often
they own a club and they want to install a cool light-show or they are into
viral marketing. I never know which way to go with that. It depends on
what side of the curve of frustration I am on at that moment.
ER

Earlier you brought up the contrast between people that were more
visually invested and others that are more interested in the performance
aspect. I wanted to hear a bit more about the continuum in the culture and
how GML fits into that?
JH

35 Anton Marini: Some personal projects of mine, for example specific effects and ‘looks’ that I have a
personal attachment to, I don’t release.
https://github.com/kylemcdonald/SharingInterviews/blob/master/antonmarini.markdown


My focus has been on tags, this one portion of graffiti. I do think
there could be cool uses for more involved pieces. It would be great if
someone else would come in and do that, because it is a part of graffiti that
I haven’t studied that much. I would not even be able to write a spec sheet for it; it requires a lot of different things when you paint these
super-involved murals, when you have an hour or more time on your
hands a lot more things come into play. Color, nozzles, nozzle changes
and so on.
ER

JH

Z-axis becomes important?

Yes, and your distance from the wall, a lot of other things my brain
isn’t wrestling with. I think tags are always fundamental, even if they are
painting murals that take three days to paint, somewhere in their graffiti
education they start with the tags. You’re still going to be judged by the
community based on how you sign your name on the blackbook.
Graffiti is funny because it is almost conservative in terms of how a successful graffiti writer is viewed, and it is reflected in how graffiti looks in
some way similar all over the world. In some way it is a let down, to travel
from Brooklyn to Paris to Brussels and it looks all the same but I think it
stems from the fact that the community is so tight-knit. But at the end
of the day it comes back to the tag always.
In terms of the performance, in a tag the relationship between form and
function is really tight. The way your hand moves and how the tag actually looks on the wall is dictated by the gesture you are making. A piece
where you have three hours, that tight synchronization isn’t there. With
a tag, every letter looks the way it does because that’s how it needs to be
drawn, because it needs to be connected to this other letter. There’s a
lot of respect for writers that do oneliners, and even if your tag has more
than one line, a good graffiti writer often has a one-line version. If you
don’t have to pick up the pen it is a really economical stroke.
ER

JH

It is almost like hacking the limitations of gesture.

It is a very specific design requirement. How to write a name that is
interesting to think about and to look at, you have to do it in 5 seconds,
you have to do it in one line, you have to do it on each type of surface.
On top of that, you have to do it a million times, for twenty years.
ER

In Seattle they call a piece that stays up for a longer time a ‘burner’. I
was connecting that to an archival practice of ephemera. It is a self-agreed
JH


upon archival process, and it means that the piece will not be touched, even
for years.

ER Graffiti has an interesting relationship to archiving. On the one hand,
many graffiti writers think: Now that tag’s done, but I’ve got another
million of them. While others do not want people painting over them,
the city or other graffiti writers. Also if a tag has been up there for a few
years, it acquires more reverence and it is even worse when it is painted
over.
But I think that GML is different, it is really more similar to a photo of
the tag. It is not trying to be the actual thing.
FS

Once a tag is saved in GML, what can be done with the data?

I am myself reluctant to take any of these tags that I’ve collected and
do anything with it at all without talking closely to whoever’s tag it is,
because it is such an intimate thing. In that sense it is strange to have
an open data repository and to be so reluctant to use it in a way that is
looking at anyone too specifically.
The sculpture I’ve been working on is an average from a workshop; sixteen different graffiti writers merged into one. I don’t want to take advantage of any one writer. But this has nothing to do with the licence,
it is totally a different topic. If someone uploads to the 000000book site,
legally anyone should be able to do anything that they can do under the
Creative Commons licence that’s on the site but I think socially within
the community, it is a huge thing.
ER

There must be some social limits to referentiality. Like beat jacking for
DJs or biting rhymes for MCs, there must be a moment where you are not
just homaging, but stealing a style.
JH

I’ve seen cases where both parties have been happy, like when Yamaguchi
Takahiro used some GML data from KATSU and piped it into Google
Maps, so he was showing these big KATSU tags all over the earth which
was a nice web-based implementation. I think he was doing what a graffiti writer does naturally: Get out there and make the tag bigger but in
different ways. He is not taking KATSU-data from the database without
shining light back on him.
ER

GML seems very inspired by the practice of Free Software, but at the
same time it reiterates the conventional hierarchies of who are supposed to
FS


use what ... in which way ... from who. For me the excitement with open
licences is that you can do things without asking permission. So, usage
can develop even if it is not already prescribed by the culture. How would
someone like me, pretty far removed from graffiti culture ever know what I
am entitled to do?

I have my reasons for which I would and would not use certain pieces
of data in certain contexts, but I like the fact that it is open for people
that might use it for other things, even if I would not push some of those
boundaries myself.
ER

Even when I am sometimes disappointed by the actual closedness of
F/LOSS, at least in theory through its licensing and refusal to limit who is
entitled and who’s not, it is a liberating force. It seems GML is only half
liberating?
FS

I agree. I think the lack of that is related to the data. The looseness of
its licence makes it less of an invitation in a sense. If the people that put
data up there would sit down and really talk about what this means, when
they would really walk through all the implications of what it means to
public domain a piece, that would be great. I would love that. Then you
could use it without having to worry about all the morality issues and
people’s feelings. It would be more free.
I think it would be good to do a workshop with graffiti writers where
beyond capturing data, you reserve an hour after the workshop to talk to
everybody about what it would mean to add an open licence. I’ve done
workshops with graffiti writers and I talked to everyone: Look, I am
going to upload this tag up to this place where everyone can download them
after the workshop, cool? And they go cool. But still, even then, do I really
feel comfortable that they understand what they’ve gotten into? Even if
someone has chosen a ShareAlike licence, I would be nervous I think.
Maybe I am putting too much weight on it. People outside Free Software
are already used to attaching Creative Commons licences to their videos.
Maybe I am too close to graffiti. I still hold the tag as primal!
ER

It is interesting to be worried about copyright on something that is
illegal, things you can not publicly claim ownership of.
JH

Would you agree that standards are a normalizing practice, that in a
way GML is part of a legalizing process?
FS


For that to happen, a larger community would have to get involved. It
would need to be Gesture Markup Language, and a community other than
graffiti writers would need to get involved.
ER

FS
ER

Would you be interested in legalizing graffiti?
No. That’s why I stopped doing projections.

Not legal forms of graffiti, but more like the vision of KRS-One of
the Hip Hop city, 36 where graffiti would obviously be legal. Does that
fundamentally change the nature of graffiti?
JH

To me it is just not graffiti anymore. It is just painting. It changes what
it is. For me, its power stems from it being illegal. The motion happens
because it is illegal.
ER

In a sense, but there is also the calligraphic aspect of it. In Brooklyn,
a lot of the building owners say: yeah, throw it up and those are some
of the craziest pieces I know of, not from a tag-standpoint, but more as
complex graffiti visuals.
JH

I am always for de-criminalization. I don’t think anyone should go to
jail over a piece of paint that you could cover over in 5 seconds. And that
KRS-One city you mentioned would be cool to see.
ER

It is his Temple of Hip Hop, the idea to build a city of Hip Hop
where the entire culture can be there without any external repression.
It’s a utopian ideal obviously.
JH

Of course I would like to see that. If nothing else, you would totally
level the playing field between us and the advertisers. The only ones that
would get up messages in the city would be the ones with more time on
their hands.
ER

At the risk of stretching coherency, Hip Hop and Free Software
are both global insurgent subcultures that have emerged from being kind
of thrown away as fads and then become objects of pondering in multinational boardrooms. So I was hoping to open you up to riff on that:
zooming out, GML is a handshake point between these two cultures, but
GML is a specific thing within this larger world of F/LOSS and graffiti
JH

36 KRS-One Master Teacher. AN INTRODUCTION TO HIP HOP. http://www.krs-one.com/#!temple-of-hip-hop/c177q


in the larger world of hiphop. What other types of contact points might
there be? Do you see any similarities and differences?

For me, even beyond technology and beyond graffiti it all boils down to
this idea of the hack that is really a phenomenon that has been going on
forever. It’s taking this system that has some sort of rigidity and repeating
elements and flipping it into doing something else. I see this in Hip Hop,
of course. The whole idea of sampling, the whole idea of turning a playback
device into a musical instrument, the idea of touching the record: all of
these things are hacks. We could go into a million examples of how graffiti
is like hacker culture.
In terms of that handshake moment between the two communities, I think
that is about realizing that its not about the code and in some sense its not
about the spraypaint. There’s this empowering idea of individual small actors
assuming control over systems that are bigger than themselves. To me, that’s
the connection point, whether its Hip Hop or rap or programming.
The similarities are there. I think there are huge differences in those communities too. One of them is this idea of the hustler from Hip Hop: the
idea of hustling doesn’t have anything to do with the economy of gift-giving. The idea that Jay-Z has popularized in Hip Hop and that rap music
and graffiti have at their core has to do with work ethic, but there’s also a
kind of braggadocio about making it yourself and attaining value yourself
and it definitely comes back to making money in the end. The idea of being
‘self-made’ in a way is empowering but I think that in the Open Source
movement or the Free Software movement the idea of hustling does not apply. It’s not that people don’t hustle on a day to day basis. You disagree with
me?
ER

It’s interesting because the more you were talking, the more I was
not sure of whether you were speaking about Hip Hop or Free Software
or maybe even more specifically the Open Source kind of ideological development. You have people like David Heinemeier Hansson who developed Ruby on Rails and basically co-opted an entire programming
language to the point where you can’t mention Ruby without people
thinking of his framework. He’s a hustler du jour: this guy’s been in
Linux Journal in a fold-out spread of him posing with a Lamborghini or
something. Talk about braggadocio! You get into certain levels or certain
dynamics within the community where it’s really like pissing contests.
JH


I like that, I think there’s something there. At the instigation of the
Open Source Initiative, though: like Linus ‘pre-stock option’, sitting in his
bedroom not seeing the sun for a year and hacking and nerding out. To me
they are so different, the idea of making this thing just for fun with a kind
of optimistic view on collaboration and sharing. I know it can turn into
money, I know it can turn into fame, I know it can turn into Lamborghinis
but I feel like where it’s coming from is different.
ER

I agree, that’s clearly a distinction between the two. They are not
coming from the same thing. But for me it’s also interesting to think
about it in terms that these are both sort of movements that have at times
been given liberational trappings, people have assigned liberatory powers
to these movements. Statistically the GPL is considerably more popular
than the Open Source licences, but I don’t know if you sat everybody
down and took a poll which side they would land on, whether they were
more about making money than they were about sharing. Are people
writing blogposts because they really want to share their ideas or because
they want to show how much cooler they are?
JH

You’re totally right and I think people in this scene are always looking
for examples of people making money, succeeding, good things coming to
people for reasons that aren’t just selflessness. People that are into Open
Source usually love to be able to point to those things, that this isn’t some
purely altruistic thing.
ER

Maybe you could take some of the hustle and turn it into something
in the Free Software world, mix and match.
JH

ER I think this line of inquiry is an interesting one that could be the
subject of a documentary or something. These communities seem very
different until you start finding things that at their core are really similar.

It would be so interesting to have a cribs moment with some gangsta
or rapper who came from that, and he’s sort of showing off his stuff and
he has this machismo about him. Not necessarily directly misogynistic
but a macho kind of character and then take a nerd and have them do the
same.
JH

FS

Would they really be so different?


Obviously some rappers and some nerds, I mean that’s one of the
beauties – I mean it’s a global movement, you can’t help but have diversity
– but if we’re just speaking in generalizations?
JH

FS

There’s a lot of showing off in F/LOSS too.

Yeah, and there’s a lot of chauvinism. And when you said that self-made thing, that’s the Free Software idea number one.
JH
ER

I think that part is a direct connection.

And they’re coming from two completely different strata, from a
class-based analysis which is absent from a lot of discussion. Even on
that level, how to integrate them to me is a political question to some
degree.
JH

ER
FS
ER
FS

Right.

Will any features of GML ever be deprecated?

Breaking currently existing software? I hope not.
Basically I’m asking for your long-term vision?

When the spec was being made of course it wasn’t just me, it was a
group of people debating these things and of course nobody wants things
to break. The idea was that we tried to get in as many things as we could
think of and have the base stay kind of what it was with the idea that you
could add more stuff into it. It’s easy enough to do, of course its not a
super-rigid standard. If you look at what the base .gml file is, the minimum
requirements for GML to compile, it’s so, so stripped down. As long as it
just remains time/x-y-z, I don’t think that’s going to change, no.
But I’m also hoping that I’m not gonna be the main GML developer. I’m
already not, there’s already people doing way more stuff with it than I am.
ER
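
Reading the base format back out is correspondingly small. A sketch that pulls the time/x-y-z points from a .gml file, assuming the same element names as in the writing sketch earlier (my reading of the public spec, not a quotation of it).

    # Sketch: read the minimal time/x-y-z data back from a .gml file.
    # Assumes the element layout used in the writing sketch above.
    import xml.etree.ElementTree as ET

    def read_points(path):
        strokes = []
        for stroke in ET.parse(path).getroot().iter('stroke'):
            pts = []
            for pt in stroke.iter('pt'):
                t = float(pt.findtext('t', '0'))
                x = float(pt.findtext('x', '0'))
                y = float(pt.findtext('y', '0'))
                z = float(pt.findtext('z', '0'))   # z is often omitted in practice
                pts.append((t, x, y, z))
            strokes.append(pts)
        return strokes

    print(read_points('tag.gml'))

Anything a new application needs can ride along as extra elements, which is presumably why additions are unlikely to break parsers that only look for these few names.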

FS

How does it work when someone proposes a feature?

ER They just e-mail me (laughs). But right now there hasn’t been a ton
of that because it’s such a simple thing, once you start cramming too much
into it, it starts feeling wrong. But all it’s gonna take is for someone to make
a new app that needs something else and then there will be a reason to
change it but I think the change will always be adding, not removing.


The following text is a transcription of a talk by and conversation with Denis Jacquerye in the context of the Libre
Graphics Research Unit in 2012. We invited him in the
context of a session called Co-position where we tried to
re-imagine layout from scratch. The text-encoding standard Unicode and moreover Denis’ precise understanding of the many cultural and political path-dependencies
involved in the making of it, felt like an obvious place
to start. Denis Jacquerye is involved in language technology, software localization and font engineering. He’s
been the co-lead of the DéjàVu Font project and works
with the African Network for Localization (ANLoc) to remove language limitations that exist in today’s technology.
Denis currently lives in London. This text is also available
in Considering your tools. 1 A shorter version has been published in Libre Graphics Magazine 2.1.

1 Considering your tools: a reader for designers and developers http://reader.lgru.net

This presentation is about the struggle of some people to use typography
in their languages, especially with digital type because there is quite a complex set of elements that make this universe of digital type. One of the
basic things people do when they want to use their languages, they end up
with these types of problems down here, where some characters are shown,
some aren’t, sometimes they don’t match within the font. Because one font
has one of the characters they need and then another one doesn’t. Like
for example when a font has the capital letter but not the corresponding
lowercase letter. Users don’t really know how to deal with that, they just
try different fonts and when they’re more courageous, they go online and
find how to complain about those to developers – I mean font designers or
engineers. And those people try to solve those problems as well as they
can. But sometimes it’s pretty hard to find out how to solve them. Adding
missing characters is pretty easy but sometimes you also have language
requirements that are very complex. Like here for example, in Polish, you
have the ogonek, which is like a little tail that shows that a vowel is nasalized. Most fonts actually have that character, but for some languages, people
are used to having that little tail centred, which is quite rare to see in a font.
So when font designers face that issue, they have to make a choice whether
they want to go with one tradition or another, and whichever way they go,
they only cater to some of those people. Also you have problems of spacing
things differently, like a stacking of different accents – called diacritics or
diacritical marks. Stacking this high up often ends up on the line above, so
you have to find a solution to make it less heavy on a line, and then in some
languages, instead of stacking them, they end up putting them side by side,
which is yet another point where you have to make a choice.
But basically, all these things are based on how type is represented on computers. You used to have simple encodings like ASCII, the basic Western
Latin alphabet where each character was represented by a byte. The characters could be displayed with different fonts, with different styles, but they could
not meet the requirements of different people. And then they made different encodings because there were a lot of different requirements and it’s
technically impossible to fit them all in ASCII.
Often they would start with ASCII and then add the specific requirements
but soon they ended up having a lot of different standards because of all the
different needs. So one single byte of representation would have different
meanings and each of these meanings could be displayed differently in fonts.
But old webpages are often using old encodings. If your browser is not
using the right encoding you would have gibberish displayed because of this
chaos of encodings. So in the late eighties, they started thinking about
those problems and in the nineties they started working on Unicode: several
companies got together and worked on one single unifying standard that
would be compatible with all the pre-used standards or the new coming
ones.
Unicode is pretty well defined, you have a universal code point to identify a character, and then that character can be displayed with
different glyphs depending on the font or the style selected. With that
framework, when you need to have the proper character displayed, you have
to go to the code point in a font editor, change the shape of the character and
it can be displayed properly. Then sometimes there’s just no code point for
the character you need because it hasn’t been added, it wasn’t in any existing
standard or nobody has ever needed it before or people who needed it just
used old printers and metal type.
So in this case, you have to start to deal with the Unicode organization itself.
They have a few ways to communicate, like the public mailing list, and
recently they also opened a forum where you can ask questions about the
characters you need as you might just not find them.
In most operating systems, you have a character map application where you
can access all the characters, either all the characters that exist in Unicode or
the ones available in the font you’re using. And it’s quite hard to find what
you need, as it’s most of the time organized with a very restrictive set of
rules. Characters are just ordered in the way they’re ordered within Unicode
using their code point order: for example, capital A is 41, and then B is 42,
etc. The further you go in the alphabet the further you go in the Unicode
blocks and tables, and there is a lot of different writing systems ... Moreover
because Unicode is sort of expanding organically – work is done on one
script, and then on another, then coming back to previous scripts to add
things – things are not really in a logical or practical order. Basic Latin is all
the way up there, and further on, you have Latin Extended-A, Latin Extended
Additional, Latin Extended-B, C and D. Those are actually quite far
apart within Unicode, and each of them can have a different setup: for
example, here you have a capital letter that is just alone, and here you have
a capital letter and a lowercase letter. So when you know the character you
want to use, sometimes you would find the uppercase letter but you’d have
to keep looking for the corresponding lowercase.
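
The code point order he describes is easy to see from any scripting language: the snippet below prints a few characters with their code points and Unicode names, showing that capital A really is 41 in hexadecimal and that related Latin letters sit in blocks far apart from each other.

    # Sketch: code points and how related Latin letters are scattered
    # across distant Unicode blocks.
    import unicodedata

    for ch in ['A', '\u0100', '\u0181', '\u1E00']:
        print('U+%04X' % ord(ch), ch, unicodedata.name(ch))

    # U+0041 A LATIN CAPITAL LETTER A                  (Basic Latin)
    # U+0100 Ā LATIN CAPITAL LETTER A WITH MACRON      (Latin Extended-A)
    # U+0181 Ɓ LATIN CAPITAL LETTER B WITH HOOK        (Latin Extended-B)
    # U+1E00 Ḁ LATIN CAPITAL LETTER A WITH RING BELOW  (Latin Extended Additional)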
Basically when you have a character that you can’t find, people from the
mailing list or the forum can tell you if it would be relevant to include it
in Unicode or not. And if you’re very motivated, you can try to meet the
inclusion criteria. But for a proper inclusion, there has to be a formal
proposal using their template with questions to answer, you also have to
provide proof that the characters you want to add are actually used or how
they would be used.


The criteria are quite complicated because you have to make sure that this is
not a glyphic variant (the same character but represented differently). Then
you also have to prove the character doesn’t already exist because sometimes
you just don’t know it’s a variant of another one; sometimes they just want
to make it easier and claim it’s a variant of another one even though you
don’t agree. For example, making sure it’s not just a ligature as sometimes
ligatures are used as a single character, sometimes they exist for aesthetic
reasons. Eventually you have to provide an actual font with the character so
that they can use it in their documentation.
How long does it take usually?

It depends as sometimes they accept it right away if you explain your request
properly and provide enough proof, but they often ask for revisions to the
proposals and then it can be rejected because it doesn’t meet the criteria.
Actually those criteria have changed a bit in the past. They started with
Basic Latin and then added special characters which were used: here for example is the international phonetic alphabet but also all the accented ones ...
As they were used in other encodings and that Unicode initially wanted to
be compatible with everything that already exists, they added them. Then
they figured they already had all those accented characters from other encodings so they’re also going to add all the ones they know are used even
though they were not encoded yet. They ended up with different names because they had different policies at the beginning instead of having the same
policy as now. They added here a bunch of Latin letters with marks that
were used for example in transcription. So if you’re transcribing Sanskrit for
example, you would use some of the characters here. Then at some point
they realized that this list of accented characters would get huge, and that
there must be a smarter way to do this. Therefore they figured you could
actually use just parts of those characters as they can be broken apart: a
base letter and marks you add to it. You may have a single character that
can be decomposed canonically into the letter B and a combining dot above,
and you have the character for the dot above in the block of the diacritical
marks. You have access to all the diacritical marks they thought were useful
at some point. When they realized they would end up having
thousands of accented characters, they figured that this way any possibility
could be covered, so from now on they're just going to say: if you
want an accented character that hasn't been encoded already, just
use the parts that can represent it. Then in 1996, some people for Yoruba,
a language spoken in Nigeria, made a proposal to add the characters with
diacritics they needed and Unicode just rejected the proposal as they could
compose those characters by combining existing parts.
Weren’t the elements they needed already in the toolbox?

Yes, the parts to encode them are there, meaning they can be represented with
Unicode, but the software didn't handle them properly, so it made more
sense to the Yoruba speakers to have them encoded as single characters in Unicode.
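The Yoruba situation can be illustrated with Python's unicodedata module (a sketch added here, not from the talk): a dot below plus an acute tone mark on 'o' has no single precomposed code point, so normalization can only compose part of the sequence and the rest stays a combining mark that fonts and software have to handle.

    import unicodedata

    # 'o' + combining dot below (U+0323) + combining acute accent (U+0301)
    sequence = "o\u0323\u0301"
    composed = unicodedata.normalize("NFC", sequence)
    print([f"U+{ord(c):04X}" for c in composed])
    # ['U+1ECD', 'U+0301']: o-with-dot-below exists as one character,
    # but there is no fully precomposed form that includes the acute as well.

    # A case that does round-trip through a single precomposed character:
    print(unicodedata.normalize("NFC", "B\u0307") == "\u1E02")   # True (B with dot above)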

So you could type, but you’d need to type two characters of course?

Yes, the way you type things is a big problem. Because most keyboards
are based on old encodings where you have accented characters as single
characters, so when you want to do a sequence of characters, you actually
have to type more, or you’d have to have a special keyboard layout allowing
you to have one key mapped to several characters. So that’s technically
feasible but it’s a slow process to have all the possibilities. You might have
one which is very common so developers end up adding it to the keyboard
layouts or whatever applications they’re using, but not when other people
have different needs.
There is a lot of documentation within Unicode, but it’s quite hard to find
what you want when you’re just starting, and it’s quite technical. Most of it
is actually in a book they publish at every new version. This book has a few
chapters that describe how Unicode works and how characters should work
together, what properties they have. And all the differences between scripts
are relevant. They also have special cases trying to cater to those needs that
weren’t met or the proposals that were rejected. They have a few examples
in the Unicode book: in some transcription systems they have this sequence
of characters or ligature; a t and a s with a ligature tie and then a dot above.
So the ligature tie means that t and s are pronounced together and the dot
above is err ... has a different meaning (laughs). But it has a meaning! But
because of the way characters work in Unicode, applications actually reorder
it: whatever you type in is reordered so that the ligature tie ends up being
moved after the dot. So you always have this representation because you
have the t, there should be the dot, and then there should be the ligature tie
and then the s. So the t goes first, the dot goes above the t, the ligature tie
goes above everything and then the s just goes next to the t. The way they
explain how to do this is: you're supposed to do the t, the ligature tie, and then a
special diacritical mark that prevents any kind of reordering, then you can
add the dot and then you can do the s. So this kind of use is great as you
have a solution, it’s just super hard because you have to type five characters
instead of ... well ... four (laughs). But still, most of the libraries that are
rendering fonts don’t handle it properly and then even most fonts don’t
plan for it. So even if the fonts did, the libraries wouldn't handle it
properly. Then there are other things that Unicode does: because of that
separation between accents and characters and then the composition, you
can actually normalize how things are ordered. This sequence of characters
can be reordered into the pre-composed one with a circumflex or whatever;
you have combining marks in the normalized order. All these things have
to be handled in the libraries, in the application or in the fonts.
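The reordering and the normalization he describes can be watched directly in Python (again a sketch added here; it assumes the 'special diacritical mark that prevents reordering' is U+034F COMBINING GRAPHEME JOINER, which is defined to block canonical reordering):

    import unicodedata

    tie, dot, cgj = "\u0361", "\u0307", "\u034F"   # ligature tie, dot above, grapheme joiner

    # Each combining mark has a canonical combining class that decides its normalized order:
    print(unicodedata.combining(dot), unicodedata.combining(tie))    # 230 234

    # Normalization moves the tie after the dot, regardless of what was typed:
    print(unicodedata.normalize("NFD", "t" + tie + dot + "s") == "t" + dot + tie + "s")   # True

    # With the joiner in between (five characters instead of four), the typed order survives:
    print(unicodedata.normalize("NFD", "t" + tie + cgj + dot + "s")
          == "t" + tie + cgj + dot + "s")                            # True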
The documentation of Unicode itself is not prescriptive, meaning that the
shapes of the glyphs are not set in stone. So you can still have room to
have the style you want, the style your target users want. For example
if we have different glyphs: Unicode has just one shape and it’s the font
designer’s choice to have different ones. Unicode is not about glyphs, it’s
really about how information is represented, not about how it's displayed. Or you have
two characters displayed as a ligature: it is actually encoded as one character
because of previous encodings. But if it were a new case, Unicode
wouldn't accept the ligature as a single character.

So all this information is really in a corner there. It's quite rare to find fonts
that actually use this information to cater to the needs of the people who
need specific features. One of the ways to implement all those features is
with TrueType/OpenType, and there are also some alternatives like Graphite,
which is a subset of a TrueType/OpenType font. But then, you need your
applications to be able to handle Graphite. So eventually the one real
standard is TrueType/OpenType. It's pretty well documented and very technical because it allows you to do many things for many different writing systems.
But it's slow to update, so if there's a mistake in the actual specifications of
OpenType, it takes a while before they correct it and before that correction shows up in your application. It's quite flexible, and one of the big
issues is that it has its own language code system, meaning that some languages just can't be identified in OpenType. One of the features in
OpenType is managing language environment. If I’m using Polish, I’d want
this shape; if I’m using Navajo, I’d want this shape. That’s very cool because you can make just one font that’s used by Polish speakers and Navajo
speakers without them worrying about changing fonts, as long as they specify the language they're using. But you can't use this feature for languages
which aren't in the OpenType specifications, as OpenType has its own way of
describing languages, different from Unicode. It's really frustrating because you can
find all the characters in Unicode, though not organized in a practical way: you have
to look all around the tables to find the characters that may be used by one
language, and then you have to look around for how to actually use them.
There is a real lack of awareness within the font designer community. Because
even when they might add all the characters you need, they might just not
add the positioning, so for example you have a ... when you combine with a
circumflex, it doesn’t position well because most of the font designers still
work with the old encoding mindset when you have one character for one
accented letter. Sometimes they just think that following the Unicode
blocks is good enough. But then you have problems where, as you can see
in the Basic Latin charts at the beginning, the capital is in one block and
its lowercase in a different block. And then they just work on one block,
they just don’t do the other one because they don’t think it’s necessary, but
yet, two blocks of the same letter are there, so it would make sense to have
both. It’s hard because there’s very few connections between the Unicode
world, people working on OpenType libraries, font designers and the actual
needs of the users.
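For what it is worth, the language systems a font actually declares can be inspected with the fontTools Python library (a sketch under assumptions: "SomeFont.ttf" is a hypothetical path and the font has a GSUB table); the tags that come back belong to OpenType's own language tag registry, which is exactly the mismatch with Unicode and ISO language codes described above.

    from fontTools.ttLib import TTFont

    font = TTFont("SomeFont.ttf")        # hypothetical font file
    gsub = font["GSUB"].table            # substitution features such as 'locl' live here

    for record in gsub.ScriptList.ScriptRecord:
        langs = [ls.LangSysTag for ls in record.Script.LangSysRecord]
        print(record.ScriptTag, langs or ["(default only)"])
    # e.g. "latn ['PLK ', 'NAV ']" would mean Polish and Navajo can get their own
    # localized shapes; a language missing from the OpenType registry cannot be
    # addressed this way at all.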

At the beginning of the presentation you showed the code points of the characters;
all your characters are subtitled by their code points; it’s kind of the beauty of
Unicode to name everything, every character.
Those names are actually quite long. One funny thing about this: Unicode
has the policy of not changing the names of the characters, so they have an
errata where they realized that oh, we shouldn’t have named this that, so here’s
the actual name that makes sense, and the real name is wrong.
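One well-known instance of that naming policy (an example added here, not one mentioned in the talk) is U+FE18, whose official, immutable name contains the misspelling 'BRAKCET'; the corrected spelling exists only as a formal name alias. Python still reports the original name:

    import unicodedata

    print(unicodedata.name("\uFE18"))
    # PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET
    # The character name is frozen, typo and all; Unicode publishes the fix
    # only as a formal alias in its errata and alias list.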

Pierre refers to the fact that in the character mappings each of the glyphs
also has a description. And those are sometimes so abstract and poetic that
this was the start of a work by OSP, the Dingbats Liberation Fest, to try
to re-imagine what shapes would belong to those descriptions. So ‘combining
dot above’ that’s the textual description of the code point. But of course there
are thousands of them so they come up with the most fantastic gymnastics ...
So when people come into a project like DéjàVu, they have to understand
all that to start contributing. How does this training, teaching, learning
process take place?

Usually most people are interested in what they know. They have a specific
need and they realize they can add it to DéjàVu, so they learn how to play
with FontForge. After a while, what they’ve done is good and we can use
it. Some people end up adding glyphs they’re not familiar with. For example we had Ben doing Arabic: it was mostly just drawing and then asking
for feedback on the mailing list; then we got some feedback, we changed
some things, eventually released it, getting more feedback (laughs) because
more people complained ... So it’s a lot of just drawing what you can from
resources you can find. It’s often based on other typefaces therefore sometimes you’re just copying mistakes from other typefaces ... So eventually it’s
just the feedback from the users that’s really helpful because you know that
people are using it, trying it, and then you know how to make it better.

(Type) designer Pedro Amado is amongst many other things
initiator of TypeForge 1 , a website dedicated to the development
of ‘collaborative type’ with Open Source tools. While working
as design technician at FBAUP 2 , he is about to finish an MA
with a paper on collaborative methods for the creation of art
and design projects. When I e-mailed him in 2006 about open
font design and how he sees that developing, he responded with
a list of useful links, but also with:
Developing design teaching based on
Open Source is one of my goals, because
I think that is the future of education.

This text is based on the conversation about design, teaching
and software that followed.

You told me you are employed as ‘design technician’ ... what does that
mean?

It means that I provide assistance to teachers and students in the Design
Department. I implemented scanning/printing facilities for example, and
currently I develop and give workshops on Digital Technologies – software
is a BIG issue for me right now! Linux and Open Source software are slowly
entering the design spaces of our school. For me it has been a ‘battle’ to
find space for these tools. I mean – we could migrate completely to OSS
tools, but it’s a slow progress. Mainly because people (students) need (and
want) to be trained in the same commercial applications as the ones they
will encounter in their professional life.
How did Linux enter the design lab? How did that start?

It started with personal curiosity, but also for economic reasons. Our
school can't afford to acquire all the software licenses we'd like. For example, we can't justify paying for approx. 100 x 10 licenses, just to implement
1 http://www.typeforge.net/
2 http://www.fba.up.pt/

the educational version of Fontlab on some of our computers; especially because this package is only used by a part of our second year design students.
You can imagine what the total budget will be with all the other needs ... I
personally believe that we can find everything we need on the web. It’s a
matter of searching long enough! So this is how I was very happy to find
Fontforge. An Open Source tool that is solid enough to use in education
and can produce (as far as I have been able to test) almost professional results in font development. At first I couldn’t grasp how to use it under X 3
on Windows, so one day I set out to try and do it on Linux ... and one thing
led to another ...

What got you into using OSS? Was it all one thing leading to another?

Uau ... can’t remember ... I believe it had to do with my first experiences
online; I don’t think I knew the concept before 2000. I mean I’ve started
using the web (IRC and basic browsing) in 1999, but I think it had to do
with the search for newer and better tools ...
I think I also started to get into it around that time. But I think I was
more interested in copyleft though, than in software.

Oh ... (blush) not me ... I got into it definitely for the ‘free beer’ aspect!
By 2004 I started using DTP applications on Linux (still in my own time)
and began to think that these tools could be used in an educational context,
if not professionally. In the beginning of 2006 I presented a study to the
coordinator of the Design Department at FBAUP, in which I proposed to
start implementing Open Source tools as an alternative to the tools we were
missing. Blender for 3D animation, FontForge for type design, Processing
for interactive/graphic programming and others as a complement to proprietary packages: GIMP, Scribus and Inkscape to name the most important
ones. I ran into some technical problems that I hope will be sorted out
soon; one of the strategies is to run these software packages on a migration
basis – as the older computers in our lab won’t be able to run MacOS 10.4+,
we’ll start converting them to Linux.
3 Cygwin/X is a port of the X Window System to the Cygwin API layer for the Microsoft Windows family of operating systems. Cygwin/X: X Windows – on Windows! http://x.cygwin.com/, 2014. [Online; accessed 5.8.2014]

I wanted to ask you about the relation between software and design.
To me, economy, working process, but also aesthetics are a product of
software, and at the same time software itself is shaped through use. I
think the borders between software and design are not so strictly drawn.

It’s funny you put things in that perspective. I couldn’t agree more.
Nevertheless I think that design thinking prevails (or it should) as it must
come first when approaching problems. If the design thinking is correct,
the tools used should be irrelevant. I say ‘should’ because in a perfect environment we could work within a team where all tools (software/hardware)
are mastered. This rarely happens, so much of our design thinking is still
influenced by what we can actually produce.

Do you mean to say that what we can think is influenced by what we
can make? This would work for me! But often when tools are mastered,
they disappear in the background and in my opinion that can become a
problem.

I’m not sure if I follow your point. I agree that the border between design
and software is not so strict; nevertheless, I don’t agree that economy, process
and aesthetics are a product of software. As you’ve come to say, what we think
is influenced by what we can make ... this is an outside observation ...
A technique is produced inside a culture,
therefore one’s society is conditioned by
its techniques. Conditioned, not determined. 4

Design, like economics and software, is a product of culture. Or is it
the other way around? The fact is that we can’t really tell what comes first.
Culture is defined by and defines technology. Therefore it’s more or less
simple to accept that software determines (and is determined by) its use.
This is an intricate process ... it kind of goes roundabout on itself ...
4 Pierre Lévy. Cyberculture (Electronic Mediations). University of Minnesota Press, 2001

And where does design fit in in your opinion? Or more precisely:
designers?

Design is a cultural aspect. Therefore it does not escape this logic. Using
a practical standpoint: Design is a product of economics and technology.
Nevertheless the best design practices (or at least the ones that have endured
the test of time) and the most renowned designers are the ones that can
escape the economic and technological boundaries. The best design
practices are the ones that are not products of economics and technology
... they are kind of approaching a universal design status (if one exists). Of
course ... it’s very theoretical, and optimistic ... but it should be like this ...
otherwise we’ll stop looking for better or newer solutions, and we’ll stop
pushing boundaries and design as technology and other areas will stagnate.
On the other hand, there is a special ‘school’ of thought manifested through
some of the Portuguese Design Association members, saying that the design
process should lead the process of technological development. Henrique
Cayate (I think it was in November last year) said that design should lead the
way to economy and technology in society. I think this is a bit far fetched ...

Do you think software defines form and/or content? How is software
related to design processes?
I think these are the essential questions related to the use of OSS. Can
we think about what we can make without thinking about process? I believe
that in design processes, as in design teaching, concepts should be separated
from techniques or software as much as possible.
To me, exactly because techniques and software are intertwined, software matters and should offer space for thinking (software should therefore not be separated from design). You could also say: design becomes
exceptionally strong when it makes use of its context, and responds to it
in an intelligent way. Or maybe I did not understand what you meant by
being ‘a product of ’. To me that is not necessarily a negative point.
Well ... yes ... that could be a definition of good design, I guess. I think
that as a cultural product, techniques can't determine society. They can and
will influence it, but at the same time it will also just happen. When we talk
about Design and Software I see the same principle reflected. Design being
the ‘culture’ or society and software being the tools or techniques that are
developed to be used by designers. So this is much the same as Which came
first? The chicken, or the egg? Looking at it from a designer's (not a software
developer's) point of view, the tools we use will always condition our output.
Nevertheless I think it’s our role as users to push tools further and let developers know what we want to do with them. Whether we do animation on
Photoshop, or print graphics on Flash, that's our responsibility. We have to
use our tools in a responsible way. Knowing that the use we make of them
will eventually come back at us. It’s a kind of responsible feedback.
Using Linux in a design environment is not an obvious choice. Most
designers are practically married to their Adobe Suite. How come it is
entering your school after all?

Very slowly! Linux is finally becoming valuable for the Design/DTP area as
it has long been for the Internet/Web and programming areas. But you
can’t expect GIMP to surpass Photoshop. At least not in the next few years.
And this is the reality. If we can, we must train our students to use the
best tools available. Ideally all tools available, so they won’t have problems
when faced with a tool professionally. The big question is still how we,
besides teaching students theory and design processes (with the help of free
tools), help them to become professionals. We also have to teach them
how to survive a professional relationship with professional tools like the
Adobe Suite. As I am certain that Linux and OSS (or F/LOSS) will be
part of education’s future, I am certain of its coexistence alongside
commercial software like Adobe’s. It’s only a matter of time. Being certain
of this, the essential question is: how will we manage to work in parallel in
both commercial and free worlds?

Do you think it is at all possible to ‘survive’ on other tools than the
ones Adobe offers?

Well ... I seem not to be able to dedicate myself entirely to these new
tools ... To depend solely on OSS tools ... I think that is not possible, at
least not at this moment. But now is the time to take these OSS tools
and start to teach with them. They must be implemented in our schools.
I am certain that sooner or later this will be common practice throughout
European schools.
Can you explain a bit more, what you mean by ‘real world’?

Being a professional graphic designer is what we call the ‘real world’ in
our school. I mean, having to work full time doing illustration, corporate
identity, graphic design, etc., to make a living, deliver on time to clients and
make a profit to pay the bills by the end of the month!

Do you think OSS can/should be taught differently? It seems self-teaching is built into these tools and the community around them. It means
you learn to teach others in fact ... that you actually have to leave the
concept of ‘mastering’ behind?
I agree. The great thing about Linux is precisely that – as it is developed
by users and for users – it is developing a sense of community around it, a
sense of given enough eyeballs, someone will figure it out.
Well, that does not always work, but most of the time ...

I believe that using Open Source tools is perfect to teach, especially
first year students. Almost no one really understands what the commands
behind the menus of Photoshop mean, at least not the people I’ve seen in
my workshops. I guess GIMP won’t resolve this matter, but it will help
them think about what they are doing to digital images. Especially when
they have to use unfamiliar software. You first have to teach the design
process and then the tool can be taught correctly, otherwise you’ll just be
teaching habits or tricks. As I said before, as long as design prevails and not
the tool/technique, and you teach the concepts behind the tools in the right
way, people will adapt seamlessly to new tools, and the interface will become
invisible!

Do you think this means you will need to restructure the curriculum?
I imagine a class in bugreporting ... or getting help online ...

mmhh ... that could be interesting. I’ve never thought about it in that
way. I’ve always seen bugreporting and other community driven activities
as part of the individual aspect of working with these tools ... but basically
you are suggesting to implement an ‘Open Source civic behavior class’ or
something like that?

Ehm ... Yes! I think you need to learn that you own your tools, meaning
you need to take care of them (i.e. if something does not work, report)
but at the same time you can open them up and get under the hood ...
change something small or something big. You also need to learn that
you can expect to get help from other people than your tutor ... and that
you can teach someone else.

The aspect of taking responsibility, this has to be cultivated – a responsible use of these tools. About changing things under the hood ... well, this I
think will be more difficult. I think there is barely space to educate people to hack their own tools, let alone getting under the hood and modifying
them. But you are right that under the OSS communication model, the
peer review model of analysis, communication is getting less and less hierarchical. You don’t have to be an expert to develop new or powerful tools or
other things ... A peer-review model assumes that you just need to be clever
and willing to work with others. As long as you treat your collaborators
as peers, whether or not they are more or less advanced than you, this will
motivate them to work harder. You should not disregard their suggestions
and reward them with the implementations (or critiques) of their work.

How does that model become a reality in teaching? How can you
practice this?

Well ... for example use public communication/distribution platforms
(like an expanded web forum) inside school, or available on the Internet;
blog updates and suggestions constantly; keep a repository of files; encourage the use of real time communication technologies ... as you might have
noticed, this is almost the formula used in e-learning solutions.
And also often an argument for cutting down on teaching hours.

That actually is and isn’t true. You can and will (almost certainly) have
fewer and fewer traditional classes, but if the teachers and tutors are dedicated,
they will be more available than ever! This will mean that students and
teachers will be working together in a more informal relationship. But it
can also provoke an invasion of the personal space of teachers ...
It is hard to put a border when you are that much involved. I am
just thinking how you could use the community around Open Source
software to help out. I mean ... if the online teaching tools would be
open to others outside the school too, this would be the advantage. It
would also mean that as a school, you contribute to the public domain
with your classes and courses.

That is another question. I think schools should contribute to public
domain knowledge. Right now I am not sharing any of the knowledge
about implementing OSS in a school like ours with the community. But
if all goes well I’ll have this working by December 2006. I’m working on
a website where I can post the handbooks for workshops and other useful
resources.
I am really curious about your experiences. However convinced I am
of the necessity to do it, I don’t think it is easy to open education up to
the public, especially not for undergraduate education.

I do have my doubts too. If you look at it from a commercial perspective,
students are paying for their education ... should we share the same content
to everyone? Will other people explore these resources in a wrong way?
Will it really contribute to the rest of the community? What about profit?
Can we afford to give this knowledge away for free, I mean, as a school this
is almost our only source of income? Will the prestige gained, be worth
the possible loss? These are important questions that I need to think more
about.

OK, I will be back with you in 6 months to find out more! My last question ... why would you invest time and energy in OSS when you think
good designers should escape economic and technological boundaries?
If we invest energy on OSS tools now, we’ll have the advantage of already
being savvy by the time they become widely accepted. The worst case scenario would be that you’ve wasted time perfecting your skills or learned a
new tool that didn’t become a standard ... How many times have we done
this already in our lives? In any case, we need to learn the concepts behind
the tools, learn new and different tools, even unnecessary ones in order to
broaden our knowledge base – this will eventually help us think ‘out of the
box’ and hopefully push boundaries further [not so much as escaping them].
For me OSS and its movement have reached a maturity level that can prove
its own worth in society. Just see Firefox – when it reached general user
acceptance level (aka ‘project maturity’ or ‘development state’), they started
to compete directly with MS Internet Explorer. This will happen with the
rest (at least that’s what I believe). It’s a matter of quality and doing the
correct broadcast to the general public. Linux started almost as a personal
project and now it’s a powerhouse in programming or web environments.
Maybe because these are areas that require constant software and hardware
attention it became an obvious and successful choice. People just modified it
as they needed it done. Couldn’t this be done as effectively (or better) with
commercial solutions? Of course. But could people develop personalized
solutions to specific problems in their own time frame? Probably not ... But
it means that the people involved are, or have recourse to, computer experts.
What about the application of these ideas to other areas? The justice department of the Portuguese government (Ministério da Justiça) is for example
currently undergoing a massive informatics (as in the tools used) change –
they are slowly migrating their working platform to an Open Source Linux
distribution – Caixa Mágica (although it’s maintained and given assistance
by a commercial enterprise by the same name). By doing this, they’ll cut
costs dramatically and will still be able to work with equivalent productivity
(one hopes: better!). The other example is well known. The Spanish region of Extremadura looked for a way to cut costs on the implementation
of information technologies in their school system and developed their own
Linux Distro called Linex – it aggregates the software bundle they need,
and best of all has been developed and constantly tweaked by them. Now
Linux is becoming more accessible for users without technical training, and
is in a WYSIWYG state of development, so I really believe we should start
using it seriously so we can try and test it and learn how we can use it in
our everyday life (for me this process has already started ... ). People aren’t
stupid. They’re just ‘change resistant’. One of the aspects I think that will
get people’s attention will be that a ‘free beer’ is as good as a commercial
one.

August 2006. One of the original co-conspirators of the
OSP adventure is the Brussels graphiste going under the
name Harrisson. His interest in Open Source software
flows with the culture of exchange that keeps the off-centre music scene alive, as well as with the humanist
tradition persistently present in contemporary typography. Harrisson’s visual frame of reference is eclectic and
vibrant, including modernist giants, vernacular design,
local typographic culture, classic painting, drawing and
graffiti. Too much food for one conversation.

FS: You could say that ‘A typeface is entirely derivative’, but others argue, that maybe
the alphabet is, but not the interpretations of it.

H: The main point of typography and ownership today is that there is a blurred
border between language and letters. So: now you can own the ‘shape’ of
a letter. Traditionally, the way typographers made a living was by buying
(more or less expensive) lead fonts, and with this tool they printed books
and got paid for that. They got paid for the typesetting, not for the type.
That was the work of the foundries. Today, thanks to the digital tools, you
can easily switch between type design, type setting and graphic design.

FS: What about the idea that fonts might be the most ‘pirated’ digital object possible?
Copying is much more difficult when you’ve got lead type to handle!

H: Yes, digitalisation changed the rules. Just as .mp3 changed the philosophy
of music. But in typography, there is a strange confrontation between this
flux of copied information, piracy and old rules of ownership from the past.

FS: Do you think the culture of sharing fonts changed? Or: the culture of distributing
them? If you look at most licences for fonts, they are extremely restrictive. Even
99% of free fonts do not allow derivative works.

H: The public good culture is paradoxically not often there. Or at least the
economical model of living with public good idea is not very developed.
While I think typography, historically, is always seen as a way to share
knowledge. Humanist stuff.

The art and craft of typeface design is
currently headed for extinction due to the
illegal proliferation of font software,
piracy, and general disregard for proper
licensing etiquette. 1

H: Emigré ... Did they not live from the copyrights of fonts?!

FS: You are right. They are like a commercial record company. Can you imagine
what would happen if you would open up the typographic trade – to ‘Open Source’
this economy? Stop chasing piracy and allow users to embed, study, copy, modify
and redistribute typefaces?

H: Well we are not that far from this in fact. Every designer has at least 500 fonts
on their computer, not licenced, but copied because it would be impossible to pay for!

FS: Even the distribution model of fonts is very peer-to-peer as well. The reality
might come close, but font licences tell a different story.
I believe that we live in an era where
anything that can be expressed as bits
will be. I believe that bits exist to
be copied. Therefore, I believe that any
business-model that depends on your bits
not being copied is just dumb, and that
lawmakers who try to prop these up are like
governments that sink fortunes into protecting people who insist on living on the
sides of active volcanoes. 2

1 http://redesign.emigre.com/FAQ.php
2 Cory Doctorow in http://craphound.com/bio.php

FS: I am not saying all fonts should be open, but it is just that it would be interesting
when type designers were testing and experimenting with other ways of developing
and distributing type, with another economy.

H: Yes, but fonts have a much more reduced user community than music or
bookpublishing, so old rules stay.

FS: Is that it? I am surprised to see that almost all typographers and foundries take the
‘piracy is a crime’ side on this issue. While typographers are early and enthusiastic
adopters of computer technology, they have not taken much from the collaborative
culture that came with it.

H: This is the ‘tradition’ typography inherited. Typography was one of the
first laboratories for fractioning work for efficiency. It was one of the first
modern industries, and has developed a really deep culture where it is not
easy to set doubts in. 500 years of tradition and only 20 years of computers.
The complexity comes from the fact it is influenced by a multiple series of
elements, from history and tradition to the latest technologies. But it is
always related to an economic production system, so property and ‘secrets-of-the-trade’
have a big influence on it.

FS: I think it is important to remember how the current culture of (not) sharing fonts
is linked to its history. But books have been made for quite a while too.

H: Open Source systems may be not so much influencing distribution, licences
and economic models in typography, but can set original questions to this
problematic of digital type. Old tools and histories are not reliable anymore.

FS: Yes. With networked software it is rather obvious that it is useful to work together.
I try to understand how this works with respect to making a font. Would that
work?

H: Collaborative type is extremely important now, I think. The globalisation of
computer systems sets the language of typography in a new dimension. We
use computers in Belgium and in China. Same hardware. But language is
the problem! A French typographer might not be the best person to define
a Vietnamese font. Collaborativity is necessary! Pierre Huyghebaert told me
he once designed an Arabic font when he was in Lebanon. For him, the
font was legible, but nobody there was able to read it.

FS: But how would you collaborate then? I mean ... what would be the reason for
a French typographer to collaborate with one from China? What would that
bring? I’m imagining some kind of hybrid result ... kind of interesting.

H: Again, sharing. We all have the idea that English is the modern Latin,
and if we are not careful the future of computers will result in a language
reductionism.

FS: What interests me in Open Source is the potential for ‘biodiversity’.

H: I partially agree, and the Open Source idea contradicts the reductionist
approach by giving more importance to local knowledge. A collaboration
between an Arabic typographer and a French one can be to work on tools
that allow both languages to co-exist. LaTeX permits that, for example.
Not QuarkXpress!

FS: Where does your interest in typography actually come from?

H: I think I first looked at comic books, and then started doodling in the
margins of schoolbooks. As a teenager, I used to reproduce film titles such
as Aliens, Terminator or other sci-fi high-octane typographic titles.
Basically, I’m a forger! In writing, you need to copy to understand. That’s an
old necessity. If you use a typeface, you express something. You’re putting
drawings of letters next to each other to compose a word/text. A drawing
is always emotionally charged, which gives color (or taste) to the message.
You need to know what’s inside a font to know what it expresses.

FS: How do you find out what’s inside?

H: By reproducing letters, and using them. A Gill Sans does not have the same
emotional load as a Bodoni. To understand a font is complicated, because
it refers to almost every field in culture. The banners behind G.W. Bush
communicate more than just ‘Mission Accomplished’. Typefaces carry a
‘meta language’.

FS: It is truly embedded content.

H: Exactly! It is still very difficult to bridge the gap between personal emotions
and programming a font. Moreover, there are different approaches, from
stroke design to software that generates fonts. And typography is standardisation.
The first digital fonts are drawn fixed shapes, letter by letter,
‘outstrokes’. But there is another approach where the letters are traced by
the computer. It needs software to be generated. In Autocad, letters are
‘innerstroke’ that can vary in weight. LettError’s Beowolf 3 is also an example
of that kind of approach. An interesting way to work, but the font depends
on the platform it goes with. Beowolf only works on OS9. It also set the
question of copyright very far. It’s a case study in itself.

FS: So it means, the font is software in fact?

H: Yes, but the interdependence between font and operating systems is very strong,
contrary to a fixed format such as TrueType. For printed matter, this is much
more complicated to achieve. There are in-between formats, such as Multiple
Master Technology for example. It basically means, that you have 2 shapes for
1 glyph, and you can set an ‘alternative’ shape between the 2 shapes. At Adobe
they still do not understand why it was (and still is) a failure ...
3 Beowolf by Just van Rossum and Erik van Blokland (1989). Instead of recreating a fixed outline or bitmap, the Randomfont redefines its outlines every time they are called for. http://letterror.com/writing/is-best-really-better

The Metapolator Universe by Simon Egli (2014)

FS: I really like this idea ... to have more than one master. Imagine you own one
master and I own the other and then we adjust and tweak from different sides.
That would be real collaborative type! Could ‘multiple’ mean more than one you
think?

H: It is a bit more complicated than drawing a simple font in Fontographer or
Fontforge. Pierre told me that the MM feature is still available in Adobe
Illustrator, but that it is used very seldomly. Multiple Master fonts are also
a bit complicated to use. I think there were a lot of bugs first, and then you
need to be a skilled designer to give these fonts a nice render. I never heard
of an alternative use of it, with drawing or so. In the end it was probably
never a success because of the software dependency.

FS: While I always thought of fonts as extremely cross media. Do you remember which
classic font was basically the average between many well-known fonts? Frutiger?

H: Fonts are Culture Capsules! It was Adrian Frutiger. But he wasn’t the only
one to try ... It was research for the Univers font I think. Here again we
meet this paradox of typography: a standardisation of language generating
cultural complexity.

FS: Univers. That makes sense. Amazing to see those examples together. It seems
digital typography got stuck at some point, and I think some of the ideas and
practices that are current in Open Source could help break out of it.

H: Yes of course. And it is almost virgin space.

FS: In 2003 the Danish government released ‘Union’, a font that could be freely used
for publications concerning Danish culture. I find this an intriguing idea, that a
font could be seen as some kind of ‘public good’.

Univers by Adrian Frutiger (1954)

Union by Morten Rostgaard Olsen (2003)

H: I am convinced that knowledge needs to be open ... (speaking as the son of a
teacher here!). One medium for knowledge is language and its atoms are letters.

FS: But if information wants to be free, does that mean that design needs to be free
too? Is there information possible without design?

H: This is why I like books. Because it’s a mix between information and beauty – or
can be. Pfff, there is nothing without design ... It is like is there something without
language, no?

One of the things that is notable about
OSP is that the problems that you encounter
are also described, appearing on your blog.
This is something unusual for a company attempting to produce the impression of an
‘efficient solution’. Obviously the readers
of the blog only get a formatted version
of this, as a performed work? What’s the
thinking here?

This interview about the practice of OSP was carried out by
e-mail between March and May 2008. Matthew Fuller writes
about software culture and has a contagious interest in technologies that exceed easy fit solutions. At the time, he was
David Gee reader in Digital Media at the Centre for Cultural
Studies, Goldsmiths College, University of London, and had
just edited Software Studies, A Lexicon, 1 and written Media
Ecologies: Materialist Energies in Art and Technoculture 2 and
Behind the Blip: Essays on the Culture of Software. 3

OSP is a graphic design agency working solely with Open Source software. This
surely places you currently as a world first, but what exactly does it mean in
practice? Let’s start with what software you use?

There are other groups publishing with Free Software, but design collectives
are surprisingly rare. So much publishing is going on around Open Source
and Open Content ... someone must have had the same idea? In discussions
about digital tools you begin to find designers expressing concern over the
fact that their work might all look the same because they use exactly the
same Adobe suite and as a way to differentiate yourself, Free Software could
soon become more popular. I think the success of Processing is related
to that, though I doubt such a composed project will ever make anyone
seriously consider Scribus for page layout, even if Processing is Open Source.
1 Matthew Fuller. Software Studies: A Lexicon. The MIT Press, 2008
2 Matthew Fuller. Media Ecologies: Materialist Energies in Art and Technoculture. The MIT Press, 2007
3 Matthew Fuller. Behind the Blip: Essays on the Culture of Software. Autonomedia, 2003

OSP usually works between GIMP, 4 Scribus 5 and Inkscape 6 on Linux distributions and OSX. We are fans of FontForge, 7 and enjoy using all kinds
of commandline tools, psnup, ps2pdf and uniq to name a few.
How does the use of this software change the way you work, do you see some
possibilities for new ways of doing graphic design opening up?

For many reasons, software has become much more present in our work; at
any moment in the workflow it makes itself heard. As a result we feel a bit
less sure of ourselves, and we have certainly become slower. We decided to
make the whole process into some kind of design/life experiment and that
is one way to keep figuring out how to convert a file, or yet another discussion with a printer about which ‘standard’ to use, interesting for ourselves.
Performing our practice is as much part of the project as the actual books,
posters, flyers etc. we produce.
One way a shift of tools can open up new ways of doing graphic design, is
because it makes you immediately aware of the ‘resistance’ of digital material. At the point we can’t make things work, we start to consider formats,
standards and other limitations as ingredients for creative work. We are
quite excited for example about exploring dynamic design for print in SVG,
a by-product of our battle with converting files from Scalable Vector Graphics
into Portable Document Format.
Free Software allows you to engage on many levels with the technologies
and processes around graphic design. When you work through its various
interfaces, stringing tools together, circumventing bugs and/or gaps in your
own knowledge, you understand there is more to be done than contributing
code in C++. It is an invitation to question assumptions of utility, standards
and usability. This is exactly the stuff design is made of.

Following this, what kind of team have you built up, and what new competencies
have you had to develop?

The core of OSP is five people 8 , and between us we mix amongst others typography, layout, cartography, webdesign, software development, drawing,
4 image manipulation
5 page layout
6 vector editing
7 font editor
8 Pierre Huyghebaert, Harrisson, Yi Jiang, Nicolas Malevé and me

programming, open content licensing and teaching. Around it is a larger
group of designers, a mathematician, a computer scientist and several Free
Software coders that we regularly exchange ideas with.
It feels we often do more unlearning than learning; a necessary and interesting skill to develop is dealing with incompetence – what else can it be than
a loss of control? In the meantime we expand our vocabulary so we can fuel
conversations (imaginary and real life) with people behind GIMP, Inkscape,
Scribus etc.; we learn how to navigate our computers using commandline
interfaces as well as KDE, GNOME and others; we find out about file formats and how they sometimes can and often cannot speak to each other;
how to write manuals and interact with mailing lists. The real challenge is
to invent situations that subvert strict divisions of labour while leaving space
for the kind of knowledge that comes with practice and experience.
Open fonts seem to be the beginnings of a big success, how does it fit into the
working practices of typographers or the material with which they work?

Type design is an extraordinary area where Free Software and design naturally meet. I guess this area of work is what kernel coding is for a Linux
developer: only a few people actually make fonts but many people use them
all the time. Software companies have been inconsistent in developing proprietary tools for editing fonts, which has made the work of typographers
painfully difficult at times. This is why George Williams decided to develop
FontForge, and release it under a BSD license: even if he stops being interested, others can take over. FontForge has gathered a small group of fans
who, through this tool, stay in contact with a more generous approach to
software, characters and typefaces.
The actual material of a typeface has long since migrated from poisonous
lead into sets of ultra light vector drawings, held together in complicated
kerning systems. When you take this software-like aspect as a starting point,
many ways to collaborate (between programmers and typographers; between
people speaking different languages) open up, as long as you let go of the
uptight licensing policies that apply to most commercial fonts. I guess the
image of the solitary master passing on the secret trade to his devoted pupils
does not sit very well with the invitation to anyone to run, copy, distribute,
study, change and improve. How open fonts could turn the patriarchal guild
system inside out that has been carefully preserved in the closed world of
type design, is obviously of interest as well.
Very concretely, computer-users really need larger character sets that allow
for communication between let’s say Greek, Russian, Slovak and French.
These kinds of vast projects are so much easier to develop and maintain in
a Free Software way; the DéjàVu font project shows that it is possible to
work with many people spread over different countries modifying the same
set of files with the help of versioning systems like CVS.
But what it all comes down to probably ... Donald Knuth is the only person
I have seen both Free Software developers and designers wear on their T-shirts.

The cultures around each of the pieces of software are quite distinct. People
often lump all F/LOSS development into one kind of category, whereas even in
the larger GNU/Linux distros there is quite a degree of variation, but with the
smaller more specialised projects this is perhaps even more the case. How would
you characterise the scenes around each of these applications?

The kinds of applications we use form a category in themselves. They are
indeed small projects so ‘scene’ fits them better than ‘culture’. Graphics
tools differ from archetypal Unix/Linux code and language based projects
in that Graphical User Interfaces obviously matter and because they are used
in a specialised context outside their own developers' circle. This is interesting because it makes F/LOSS developer communities connect with other
disciplines (or scenes?) such as design, printing and photography.
A great pleasure in working with F/LOSS is to experience how software
can be done in many ways; each of the applications we work with is alive
and particular. I’ll just portray Scribus and Inkscape here because from the
differences between these two I think you can imagine what else is out there.
The Scribus team is rooted in the printing and pre-press world and naturally
their first concern is to create an application that produces reliable output.
Any problem you might run into at a print shop will be responded to
immediately, even late night if necessary. Members of the Scribus team are
a few years older than average developers and this can be perceived through
the correct and friendly atmosphere on their mailing list and IRC channel,
and their long term loyalty to this complex project. Following its more
industrial perspective, the imagined design workflow built into the tool is
linear. To us it feels almost pre-digital: tasks and responsibilities between
editors, typesetters and designers are clearly defined and lined up. In this
view on design, creative decisions are made outside the application, and the
canvas is only necessary for emergency corrections. Unfortunately for us,
who live off testing and trying, Scribus’ GUI is a relatively underdeveloped
area of a project that otherwise has matured quickly.
Inkscape is a fork of a fork of a small tool initially designed to edit vector
files in SVG format. It stayed close to its initial starting point and is in a way
a much more straightforward project than Scribus. Main developer Bryce
Harrington describes Inkscape as a relatively unstructured coming and going
of high energy collective work; much work is done through a larger group of
people submitting small patches and its developer community is not very
tightly knit. Centered around a legible XML format primarily designed
for the web, Inkscape users quickly understand the potential of scripting
images and you can find a vibrant plug-in culture even if the Inkscape code
is less clean to work with than you might expect. Related to this interest
in networked visuals, is the involvement of Inkscape developers in the Open
Clip Art project and ccHost, a repository system which allows you to upload
images, sounds and other files directly from your application. It is also no
surprise that Inkscape implemented a proper print dialogue only very late,
and still has no way to handle CMYK output.
There’s a lot of talk about collaboration in F/LOSS development, something
very impressive, but often when one talks to developers of such software there is
a lot to discuss about the rather less open ways in which power struggles over the
meaning or leadership of software projects are carried out by, for instance, hiding
code in development, or by only allowing very narrowly technical approaches to
development to be discussed. This is only one tendency, but one which tends to
remain publicly under-discussed. How much of this kind of friction have you
encountered by acting as a visible part of a new user community for F/LOSS?

I can’t say we feel completely at home in the F/LOSS world, but we have not
encountered any extraordinary forms of friction yet. We have been allowed
the space to try our own strategies at overcoming the user-developer divide:
people granted interviews, accepted us when we invited ourselves to speak
at conferences and listened to our stories. But it still feels a bit awkward,
and I sometimes wonder whether we ever will be able to do enough. Does
constructive critique count as a contribution, even when it is not delivered
in the form of a bug report? Can we please get rid of the term ‘end-user’?
Most discussions around software are kept strictly technical, even when
there are many non-technical issues at stake. We are F/LOSS enthusiasts
because it potentially pulls the applications we use into some form of public
space where they can be examined, re-done and taken apart if necessary; we
are curious about how they are made because of what they (can) make you
do. When we asked Andreas Vox, a main Scribus developer whether he saw
a relation between the tool he contributed code to, and the things that were
produced by it, he answered: Preferences for work tools and political preference
are really orthogonal. This is understandable from a project-management
point of view, but it makes you wonder where else such a debate should take
place.
The fact that compared to proprietary software projects, only a very small
number of women is involved in F/LOSS makes apparent how openness
and freedom are not simple terms to put in practice. When asked whether
gender matters, the habitual answer is that opportunities are equal and from
that point a constructive discussion is difficult. There are no easy solutions,
but the lack of diversity needs to be put on the roadmap somehow, or as a
friend asked: Where do I file a meta-bug?
Visually, or in terms of the aesthetic qualities of the designs you have developed
would you say you have managed to achieve anything unavailable through the
output of the Adobe empire?

The members of OSP would never have come up with the idea to combine
their aesthetics and skills using Adobe, so that makes it difficult to do a
‘before’ and ‘after’ comparison. Or maybe we should call this an achievement
of Free Software too?
Using F/LOSS has made us reconsider the way we work and sometimes this
is visible in the design we produce, more often in the commissions we take
on or the projects we invest in. Generative work has become part of our
creative suite and this certainly looks different than a per-page treatment;
also deliberate traces of the production process (including printing and prepress) add another layer to what we make.
Of all smaller and larger discoveries, the Spiro toolkit that Free Software
activist, Ghostscript maintainer, typophile and Quaker Raph Levien develops, must be the most wonderful. We had taken Bézier curves for granted,
and never imagined how the way they are mathematically defined would matter
that much. Instead of working with fixed anchor points and starting from
straight lines that you first need to bend, Spiro is spiral-based and vectors
suddenly have a sensational flow and weight. From Pierre Bézier writing his
specification as an engineer for the Renault car factory to Levien’s Spiro,
digital drawing has changed radically.

You have a major signage project coming up, how does this commission map across
to the ethics and technologies of F/LOSS?

We are right in the middle of it. At this moment ‘The Pavilion of Provisionary
Happiness’ celebrating the 50th anniversary of the Belgian World Exhibition,
is being constructed out of 30.000 beer crates right under the Brussels’
Atomium. That’s a major project done the Belgian way.
We have developed a signage system, or actually a typeface, which is defined
through the strange material and construction work going on on site. We
use holes in the facade that are in fact handles of beer crates as connector
points to create a modular font that is somewhere between Pixacao graffiti
and Cuneiform script. It is actually a play on our long fascination with
engineered typefaces such as DIN 1451; mixing universal application with
specific materials, styles and uses – this all links back to our interest in Free
Software.
Besides producing the signage, OSP will co-edit and distribute a modest
publication documenting the whole process; it makes legible how this temporary yellow cathedral came about. And the font will of course be released
in the public domain.
It is not an easy project but I don’t know how much of it has to do with
our software politics; our commissioners do not really care and also we have
kept the production process quite simple on purpose. But by opening our
sources, we can use the platform we are given in a more productive way; it
makes us less dependent because the work will have another life long after
the deadline has passed.
On this project, and in relation to the seeming omnipresence in F/LOSS of the
idea that this technology is ‘universal’, how do you see that in relation to fonts,
and their longer history of standards?

That is indeed a long story, but I’ll give it a try. First of all, I think the idea
of universal technology appears to be quite omnipresent everywhere; the
mix-up between ubiquitousness and ‘universality’ is quickly made. In Free
Software this idea gains force only when it gets (con)fused with freedom
and openness and when conditions for access are kept out of the discussion.
We are interested in early typographic standardization projects because their
minimalist modularity brings out the tension between generic systems and
specific designs. Ludwig Goller, a Siemens engineer who headed the Committee for German Industry Standards in the 1920s, stated that For the typefaces of the future neither tools nor fashion will be decisive. His committee supervised the development of DIN 1451, a standard font that should connect economy of use with legibility, and enhance global communication in the service of German industry. I think it is no surprise that a similar phrasing can be
found in W3C documents; the idea to unify the people of the world through
a common language re-surfaces and has the same tendency to negate materiality and specificity in favour of seamless translation between media and
markets.
Type historian Ellen Lupton brought up the possibility of designing typographic systems that are accessible but neither finite nor operating within a fixed set of parameters. Although I don’t know what she means by using the
term ‘open universal’, I think this is why we are attracted to Free Software:
it has the potential to open up both the design of parameters as well as their
application. Which leads to your next question.
You mentioned the use of generative design just now. How far do you go into
this? Within the generative design field there seem to be a couple of tendencies, one
that is very pragmatic, simply about exploring a space of possible designs through
parametric definition in order to find, select and breed from and tweak a good
result that would not be necessarily imaginable otherwise, the other being more
about the ineffable nature of the generative process itself, something vitalist. These tendencies are of course not exclusive, but how are they inflected or challenged in
your use of generative techniques?

I feel a bit on thin ice here because we have only started to explore the area and we are certainly not deep into algorithmic design. But on a more mundane level
... in the move from print to design for the web, ‘grids’ have been replaced by
‘templates’ that interact with content and context through filters. Designers
have always been busy with designing systems and formats, 9 but stepped in
to manipulate singular results if necessary.
I referred to ‘generative design’ as the space opening up when you play
with rules and their affordances. The liveliness and specificity of the work
results from various parameters interfering with each other, including the
ones we can get our hands on. By making our own manipulations explicit,
we sometimes manage to make other parameters at play visible too. Because
at the end of the day, we are rather bored by mysterious beauty.

One of the techniques OSP uses to get people involved with the process and the
technologies is the ‘Print Party’, can you say what that is?

‘Print Parties’ are irregular public performances we organise when we feel
the need to report on what we discovered and where we’ve been; as antiheroes of our own adventures we open up our practice in a way that seems
infectious. We make a point of presenting a new experiment, of producing
something printed and also something edible on site each time; this mix of
ingredients seems to work best. ‘Print Parties’ are how we keep contact with
our fellow designers who are interested in our journey but have sometimes
difficulty following us into the exotic territory of BoF, Version Control and
GPL3.

You state in a few texts that OSP is interested in glitches as a productive force in
software, how do you explain this to a printer trying to get a file to convert to the
kind of thing they expect?
Not! Printing has become cheap through digitization and is streamlined to
the extreme. Often there is literally no space built in to even have a second
look at a differently formatted file, so to state that glitches are productive
is easier said than done. Still, those hiccups make processes tangible, especially at moments when you don’t want them to interfere.
For a book we are designing at the moment, we might partially work by
hand on positive film (a step now also skipped in file-to-plate systems). It
makes us literally sit with pre-press professionals for a day and hopefully we
can learn better where to intervene and how to involve them into the process.
To take the productive force of glitches beyond predictable aesthetics means most of all a shift of rhythm – to affect other levels than the production process itself. We gradually learn how our ideas about slow-cooking design can survive the instant need to meet deadlines. The terminology is a bit painful, but to replace ‘deadline’ by ‘milestone’, and ‘estimate’ by ‘roadmap’, is already a beginning.

9  it really made me laugh to think of Joseph Müller Brockman as vitalist

One of the things that is notable about OSP is that the problems that you encounter are also described, appearing on your blog. This is something unusual
for a company attempting to produce the impression of an efficient ‘solution’.
Obviously the readers of the blog only get a formatted version of this, as a performed work? What’s the thinking here?

‘Efficient solutions’ is probably the last thing we try to impress with, though
it is important for us to be grounded in practice and to produce for real
under conventional conditions. The blog is a public record of our everyday
life with F/LOSS; we make an effort to narrate through what we stumble
upon because it helps us articulate how we use software, what it does to us
and what we want from it; people that want to work with us, are somehow
interested in these questions too. Our audience is also not just prospective
clients, but includes developers and colleagues. An unformatted account,
even if that was possible, would not be very interesting in that respect; we
turn software into fairytales if that is what it takes to make our point.
In terms of the development of F/LOSS approaches in areas outside software,
one of the key points of differentiation has been between ‘recipes’ and ‘food’, bits
and atoms, genotype and phenotype. That is that software moves the kinds of
rivalry associated with the ownership and rights to use and enjoy a physical object
into another domain, that of speed and quality of information, which network
distribution tends to mitigate against. This is also the same for other kinds of
data, such as music, texts and so on. (This migration of rivalry is often glossed
over in the description of ‘goods’ being ‘non-rivalrous’.) Graphic Design however
is an interesting middle ground in a certain way in that it both generates files of
many different kinds, and, often but not always, provides the ‘recipes’ for physical
objects, the actual ‘voedingstof ’, such as signage systems, posters, books, labels and
so on. Following this, do you circulate your files in any particular way, or by
other means attempt to blur the boundary between the recipe and the food?

We have just finished the design of a font (NotCourier-sans), a derivative of
Nimbus Mono, which is in turn a GPL’ed copy of the well known Courier
typeface that IBM introduced in 1955. Writing a proper licence for it opened up many questions about the nature of ‘source code’ in design, and
not only from a legalist perspective. While this is actually relatively simple
to define for a font (the source is the object), it is much less clear what it
means for a signage system or a printed book.
One way we deal with this is by publishing final results side by side with ingredients and recipes. The raw files themselves seem pretty useless once the festival is over and the book printed, so we write manuals, stories, histories. We also experiment with using versioning systems, but the software available is only half interesting to us. Designed to support code development,
changes in text files can be tracked up to the minutest detail but unless you
are ready to track binary code, images and document layouts function as
black boxes. I think this is something we need to work on because we need
better tools to handle multiple file formats collaboratively, and some form
of auto-documentation to support the more narrative work.
On the other hand, manuals and licences are surprisingly rich formats if you
want to record how an object came into life; we often weave these kinds
of texts back into the design itself. In the case of NotCourierSans we will
package the font with a pdf booklet on the history of the typeface – mixing design genealogy with suggestions for use.
I think the blurring of boundaries happens through practice. Just like
recipes are linked in many ways to food, 10 design practice connects objects
to conditions. OSP is most of all interested in the back-and-forth between
those two states of design; rendering their interdependence visible and testing
out ways of working with it rather than against it. Hopefully both the food
and the recipe will change in the process.

10  tasting, trying, writing, cooking

This brief interview with Ludivine Loiseau and Pierre Marchand
from OSP was made in December 2012 by editor and designer
Manuel Schmalstieg. It unravels the design process of Aether9,
a book based on the archives of a collaborative adventure exploring the danger zones of networked audio-visual live performance. The text was published in that same publication.
Can you briefly situate the collective work of Open Source Publishing
(OSP)?

OSP is a working group producing graphic design objects using only
Libre and/or Open Source software. Founded in 2006 in the frame of the
arts organisation Constant 1 , the OSP caravan consists today of a dozen
individuals of different backgrounds and practices.
How long have you been working as a duo, and as a team in OSP?
3 to 4 years.

And how many books have you conceived?

As a team, it’s our first ‘real’ book. We previously worked together on a
somewhat similar project of archive exploration, but without printed material in
the end. 2
Similar in the type of content or in the process?

The process: we developed scripts to ‘scrape’ the project archives, but its output was more abstract; we collected the fonts used in all the files and produced a graph
from this process. These archives weren’t structured, so the exploration was less
linear.
You rapidly chose TeX/ConTeXt as a software environment to produce
this book. Was it an obvious choice given the nature of the project, or did you
hesitate between different approaches?

The construction of the book focused on two axes/threads: chronology
and a series of ‘trace-route’ keywords. Within this approach of reading and
navigation using cross-references, ConTeXt appeared as an appropriate tool.
1  http://www.constantvzw.org
2  http://www.ooooo.be/interpunctie/

The world of TeX 3 is very intriguing, in particular for graphic designers.
It seems to me that it is always a struggle to push back the limits of what is
‘intended’ by the software.
ConTeXt is a constant fight! I wouldn’t say the same about other TeX
system instances. With ConTeXt, we found ourselves facing a very personal
project, because composition decisions are hardcoded to the liking of the package
main maintainer. And when we clash with these decisions, we are in the strange
position of using a tool while not agreeing with its builder.
As a concrete example, we could mention the automatic line spacing
adjustments. It was a struggle to get it right on the lines that include
keywords typeset with our custom ‘traced’ fonts. ConTeXt tried to do better,
and was increasing the line height of those words, as if it wanted to avoid
collisions.
Were you ever worried that what you wanted to obtain was not doable?
Did you reject some choices – in the graphic design, the layout, the structure
– because of software limitations?
Yes. Opting for a two-column layout appeared to be quite tough when filling in the content, as it introduced many gaps. At some point we decided to narrow the format to a single column. To obtain the two-column layout in the final output, the whole book was recomposed during the pdf construction, through OSPImpose. This allowed us to make micro-adjustments at the end of the production process, while introducing new games, such as shifting the images on double pages.
What is OSPImpose?
It’s a rewriting of a pdf imposition software that I wrote a couple of years ago for PoDoFo.
Again regarding ConTeXt: this system was used for other OSP works
– notably for the book Verbindingen/Jonctions 10; Tracks in electr(on)ic
fields. 4 Is it currently the main production tool at OSP?
It’s more like an in-depth initiation journey!
But it hasn’t become a standard in our workflow yet. In fact, each
new important book layout project raises the question of the tool each time. Scribus and LibreOffice (spreadsheet) are also part of our book-making toolbox.

3  a software written in 1978 by Donald Knuth
4  distinguished by the Fernand Baudin Prize 2009
During our work session with you at Constant Variable, we noticed
that it was difficult to install a sufficiently complete TeX/ConTeXt/Python
environment to be able to generate the book. Is Pierre’s machine still the only
one, or did you manage to set it up on other computers?

Now we all have similar setups, so it’s a generalized generation. But it’s true that this represented a difficulty at times.
The source code and the Python scripts created for the book are publicly
accessible on the OSP Git server. Would these sources be realistically reusable? Could other publication projects use parts of the code? Or, without any explicit documentation, would it be highly improbable?

Indeed, the documentation part is still on the to-do list. Yet a large part of the code is quite directly reusable. The code makes it possible to parse different types of files. E-mails and chat-logs are often found in project archives. Here the Python scripts allow ordering them according to date information, and will automatically assign a style to the different content fields.
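To give a concrete sense of what such a step could look like, here is a minimal Python sketch – not the actual Aether9 scripts – assuming hypothetical mail-like text files with ‘From:’ and ‘Date:’ headers; it orders the messages chronologically and tags each field with an invented style name.

    # Minimal sketch: read mail-like text files from an archive folder,
    # order them by date and label each field with a style name.
    # File layout, field names and style names are hypothetical.
    import email.utils
    from pathlib import Path

    STYLES = {"from": "speaker", "date": "timestamp", "body": "bodytext"}

    def parse_date(value):
        try:
            return email.utils.parsedate_to_datetime(value)
        except (TypeError, ValueError):
            return None

    def parse_message(path):
        header, _, body = path.read_text(errors="replace").partition("\n\n")
        fields = dict(line.split(":", 1) for line in header.splitlines() if ":" in line)
        return {
            "from": fields.get("From", "").strip(),
            "date": parse_date(fields.get("Date", "").strip()),
            "body": body.strip(),
        }

    def ordered_messages(archive_dir):
        # Sort chronologically; messages without a parsable date go last.
        messages = [parse_message(p) for p in sorted(Path(archive_dir).glob("*.txt"))]
        return sorted(messages, key=lambda m: (m["date"] is None,
                                               m["date"].timestamp() if m["date"] else 0))

    if __name__ == "__main__":
        for msg in ordered_messages("archive"):
            for field, style in STYLES.items():
                print(f"[{style}] {msg[field]}")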

The code itself is a documentation source, as much on concrete aspects, such as e-mail parsing, as on a possible architecture, on certain coding motifs, etc. And most importantly, it constitutes a form of common experience.
Do you think you will reuse some of the general functions/features of archive parsing for other projects?
Hard to say. We don’t have anything in perspective that is close to the Aether9 project. But for sure, if the need for such treatment comes up again, we’ll retrieve these software components.
Maybe for a publication/compilation of OSP’s adventures.

Have there been ‘revelations’, discoveries of unsuspected Python/ConTeXt
features during this development?

I can’t recall having this kind of pleasure. The revelation, at least from
my point of view, happened in the very rich articulation of a graphical intention enacted in programming objects. It remains a kind of uncharted territory; exploring it is always an exciting adventure.

Three fonts are used in the book: Karla, Crimson and Consola Mono.
Three pretty recent fonts, born in the webfonts context, I believe. What
considerations brought you to this choice?
Our typographical choices and research led us towards fonts with different style variations. As the textual content is quite rich and spreads
on several layers, it was essential to have variation possibilities. Also, each
project brings the opportunity to test new fonts and we opted for recently
published fonts, indeed published, amongst others, on the Google font directory. Yet Karla and Crimson aren’t fonts specifically designed for web usage. Karla is one of the rare libre grotesque fonts, and its other specificity is that it includes Tamil glyphs.
Apart from the original glyphs specially created for this book, you drew the Ç glyph that was missing from Karla ... Is it going to be included in its official distribution?
Oh, that’s a proposal for Jonathan Pinhorn. We haven’t contacted him
yet. For the moment, this cedilla has been snatched from the traced variant
collections.
Were there any surprises when printing? I am thinking in particular of
your choice of a colored ink instead of the usual black, or of the low-res quality (72dpi) of most of the images.
At the end of the process, the spontaneous decision to switch to blue ink was
a guaranteed source of surprise. We were confident that it wouldn’t destroy the
book, and we surely didn’t take too many risks since we were working with low
res images. But we weren’t sure how the images would react to such an offense. It was a great surprise to see that it gave the book a very special radiance.
What are your next projects?
We are currently operating as an invited collective at the Valence Academy
of Fine Arts in the frame of a series of workshops named ‘Up pen down’.
We’re preparing a performance for the Balsamine theatre 5 on the topic of
Bootstrapping. In April we will travel as a group to Madrid for LGRU 6 and LGM 7 . We also continually work on ‘Co-position’, a project for building a post-Gutenberg typographical tool.
5  http://www.balsamine.be/
6  http://lgru.net/
7  the international Libre Graphics Meeting: http://libregraphicsmeeting.org/2013/

Performing Libre Graphics

In April 2014 I traveled from Leipzig to the north of
Germany to meet with artist Cornelia Sollfrank. It was
right after the Libre Graphics Meeting, and the impressions from the event were still very fresh. Cornelia had
asked me for a video interview as part of Giving what you
don’t have, 1 a series of conversations about what she refers
to as ‘complex copyright-critical practices’. She was interested in forms of appropriation art that instead of claiming
some kind of ‘super-user’ status for artists, might provide
a platform for open access and Free Culture not imaginable elsewhere. I’ve admired Cornelia’s contributions to hacker culture for a long time. She pioneered as a cyberfeminist
in the 1990s with the hilarious and intelligent net-art piece
Female Extension 2 , co-founded Old Boys Network 3 and
developed seminal projects such as the Net Art Generator.
The opportunity to spend two sunny spring days with her
intelligence, humour and cyberfeminist wisdom could not
have come at a better moment.
What is Libre Graphics?

Libre Graphics is quite a large ecosystem of software tools; of people, people
that develop these tools but also people that use these tools; practices, like
how do you work with them, not just how do you make things quickly and
in an impressive way, but also how these tools might change your practice;
and cultural artifacts that result from it. It is all these elements coming together that I would call Libre Graphics. The term ‘libre’ is chosen deliberately.
1  http://postmedialab.org/GWYDH
2  http://artwarez.org/femext/content/femextEN.html
3  http://www.obn.org/

It is slightly more mysterious than the term ‘free’, especially when it turns up
in the English language. It sort of hints that there is something different,
something done on purpose. And it is also a group of people that are
inspired by Free Software culture, by Free Culture, by thinking about how
to share both their tools, their recipes and the outcomes of all this. Libre
Graphics goes in many directions. But it is an interesting context to work
in, that for me has been quite inspiring for a few years now.

The context of Libre Graphics

The context of Libre Graphics is multiple. I think that is what I am excited about, and also part of why it is sometimes difficult to describe it in a short sentence. The context is design, and people that are interested in design, in
creating visuals, animation, videos, typography ... and that is already multiple contexts, because each of these disciplines has its own histories, and its own types of people that get touched by them. Then there is
software, people that are interested in the digital material. They say, I am
excited about raw bits and the way a vector gets produced. And that is a
very, almost formal, interest in how graphics are made. Then there are people that do software. They’re interested in programming, in programming
languages, in thinking about interfaces, and thinking about ways software
can become a tool. And then there are people that are interested in Free
Software. How can you make digital tools that can be shared, but also,
how can that produce processes that can be shared. This ranges from Free Software activists to people that are interested in developing specific tools for sharing design and software development processes, like Git or Subversion, those kinds of
things. I think the multiple contexts are really special and rich in Libre
Graphics.

Free Software culture

Free Software culture, and I use the term ‘culture’ because I am interested
in, let’s say, the cultural aspect of it, and this includes software. For me
software is a cultural object. But I think it is important to emphasize this,
because it easily turns into a technocentric approach, which I think is important to stay away from. Free Software culture is the thinking that, when
you develop technology, and I am using technology in the sense that it is
cultural as well to me, deeply cultural, you need to take care as well of sharing the recipes, for how this technology has been developed. This produces
many different other tools, ways of working, ways of speaking, vocabularies, because it changes radically the way we make and the way we produce
hierarchies. It means for example, if you produce a graphic design artifact,
that you share all the source files that were necessary to make it; but you
also share as much as you can, descriptions or narrations of how it came to
be, which does include maybe how much was paid for it, where difficulties
were in negotiating with the printer; and what elements were included, because a graphic design object is usually a compilation of different elements;
what software was used to make it, and where it might have resisted. The consequences of taking Free Software culture seriously in a design context mean that you care about all these different layers of the work, all the different conditions that actually made the work happen.

Free Culture

The relationship from Libre Graphics to Free Culture is not always that
explicit. For some people it is enough to work with tools that are released
under a GPL, an open content licence. And there it stops. Even their work
will be released under proprietary licences. For others, it is important to
make the full circle and to think about what the legal status is of the work
they release. That is the more general one. Then, Free Culture, we can use
that very loosely, as in ‘everything that is circulating under conditions that
it can be reused and remade’. That would be my position. Free Culture
is of course also referred to a very specific idea of how that would work,
namely Creative Commons. For myself Creative Commons is problematic, although I value the fact that it exists and has really created a broader
discussion around licences in creative practices. I value that. For me the distinction Creative Commons makes for almost all the licences they promote,
between commercial and non-commercial work, and as a consequence, between professional and amateur work, I find that very problematic. Because
I think one of the most important elements of Free Software culture for me,
is the possibility for people from different backgrounds, with different skill
sets, to actually engage with the digital artifacts they’re surrounded with.
Making this lazy separation between commercial and non-commercial, which especially in the context of the web as it is right now is not really easy to hold up, seems really problematic. It creates an illusion of clarity
that I think actually makes more trouble than clarity. So I use Free Culture
licences, I use licences that are more explicit about the fact that anyone can
use whatever I produce in any context. Because I think that is where the
real power of Free Software culture is. For me, the point of Free Software licences and all the licences that are around it, and I think there are many different types and that is interesting, is that they have a viral power built in. So if
you apply a Free Software licence to, for example, a typeface, it means that
someone else, even someone else you don’t know, has the permission and
doesn’t have to ask for a permission, to reuse the typeface, to change it, to
mix it with something else, to distribute it and to sell it. That is one part,
that is already very powerful. But the real secret of such a licence is, that
once this person re-releases the typeface, it means that they need to keep
that same licence and it propagates across the network and that is where it
is really powerful.

Free tools

It is important to use tools that are released under conditions that allow
me to look further than its surface. For many reasons. There is an ethical
reason. It is very problematic I think, as a friend explained last week, to feel
that you’re renting a room in a hotel. That is often the way practitioners
nowadays relate to their tools. They have no right to move the furniture.
They have no right to invite friends to their hotel room. They have to check
out at eleven, etc. It is a very sterile relationship to the tools. That is one
part. The other is that there is little way to come into contact with the
cultural aspects of the tools. Something that I suspected before starting to use Free Software tools for my practice, but that has been, for almost ten years already, continuously exciting, is the whole, let’s say, all the other elements around it. The way people organize themselves in conferences, mailing lists,
the kinds of communication that happens, the vocabularies, the histories,
the connections between different disciplines ... And all that is available to
look at, to work with, to come into contact with; to speak to people that do
these tools and ask them, why is it like this and not like that. And to me it seems obvious that artists want to have that kind of layered relationship with their tools, and not just accept whatever comes out of the shop next door. I have a very different, almost physical experience of these
tools, because I can enter on many levels. That makes them part of my
practice, not just means to an end. I really can take them into my practice.
That I find interesting, as an artist and as a designer.

Artifacts

The outcomes of this type of practice are different, or at least, let’s say, in
the kind of work I make, try to make and the people I like to work with.
There are obviously also groups of people that would like to do Hollywood
movies with those tools. That is kind of interesting, that that happens.
For me somehow the technological context or conditions that made a work
possible, will always occur in the final result. So, that is one part. And
the other is that the product is never the end. It means that in whatever
way source materials will be released, will be made available, it means that
a product is always the beginning of another product, either by me or by
other people. I think that is two things that you can always see in the kind
of works we make when we do libre-graphics-my-style. When we make a
book, for example, what is already different, is when we start the process, it
is not yet defined what tool we will use. There is a whole array of tools you
can choose from. I mean, books are basically text on paper, and there are
many ways to arrive at that output. For one book we did a few years ago,
we decided for the first time, because we had never used this tool before,
to use TeX, a typesetting system that was developed by Donald Knuth in the context of academic publishing. It has been around as an almost mythological solution for perfect typesetting. We were curious about whether
we could use that system that is developed in a very specific context for an
art catalog that we wanted to make. We had to learn how to use this tool,
which meant that we somehow had to learn the vocabulary, understand its
sort of perspective; things that were possible or not, get used to the kind of
humor that is quite terrible in these manuals; accept that certain things that
we thought would be easy, were actually not easy at all; and then understand
how we could use the things that were popping up or not working or that were different, how we could use them to our advantage. The final result
is a book that is slightly strange, because there are some mistakes that have
been left in, deliberately or by accident sometimes. The book contains an
extensive description of how it was made. Both visually, like it explains the
technical details of how it was made, but also the description of that learning
process. Another example of how tools, practice and outcomes are somehow connected, but also the whole politics around it, because often these projects are also ways of teasing out how licences, practice and tools somehow interact, is a project called ‘Sans Guilt’. It is a play with the ‘Gill Sans’, which
is a famous classic typeface that is claimed to be owned by a company called
Monotype. But according to our understanding, they have no right to actually claim this typeface as such. But through their communication they do
so. OSP was invited to work in an art academy in London, where they had
a lead version. And we decided to play with the typeface. The typeface OSP
released has many different versions, not versions as in bold, light etc. but
it has different levels of ‘licencing risk’. One is a straight scan of the prints
that were made at that workshop. Another version is more guilty, in the
sense that it is an extraction from a .pdf using the Monotype Gill. Another
is a redrawn version that takes the matrix, the spacing of a Monotype Gill,
but combines it with a redrawn example. All different variations of this font
touch on different elements of licencing problems that might occur with
typefaces. We sent our experiment to Monotype, because we wanted to hear
from them what they thought. After a few months we received a letter from
a lawyer saying, would you please identify yourself. We decided to write
back as we are, which is, 25 people from 20 different countries with stable
and unstable addresses. This long list is probably why we never heard anything again, and ‘Sans Guilt’ is still available from our website under an
open font licence. What is important is that the typeface is different, in the sense that the specimen is not so much about showing off how beautiful it will
look in any context, but has the description of the process, the motivation
of why we did it, the letter we sent to Monotype, the response we got, ...
The whole packaging of the font becomes then a way of speaking about all
these layers that are in our practice.


Libre fonts

A very exciting part of Libre Graphics is the Libre Font movement, which
is strong and has been strong for a long time. Fonts are the basic building
blocks of how graphics come to life. When you type something, it is there.
And the fact that that part of the work is free, is important on many levels.
Something you often don’t think about when you speak English and you stay within a limited character set is that, when you live in, let’s say, India, the language you speak is not available as a digital typeface, meaning that when
you want to produce a book in the tools that are available or publish it
online, your language has no way of expressing itself. That has to do with
commercial interests, laws, ways the technical infrastructure has been built.
By understanding that it is important that you can express yourself in the
language and with the characters you need, it is also obvious that that part
needs to be free. Fonts are also interesting because they exist on many
levels. They exist in your system; they’re almost software because they’re
quite complicated objects; they appear on your screen, they are there when you print a document; they are there all the time. We consider the alphabet to be totally accessible and available, and having the alphabet at our disposal to be a universal right. So it is about ‘freeing the A’, you know. That’s quite a
beautiful energy. I think that has made the Libre Font movement very
strong. Something that has happened in the last few years and brings up new problems and potential areas to work on, is fonts available for the web.
Web fonts have really exploded the amount of free fonts available. Before,
fonts were always, let’s say, when they were used, tied to a document, and there was some kind of fantasy that you could hold them, you could somehow contain them, licence them and keep them in check. With the
web that idea has gone. And many people have decided to liberate their
fonts to be able to make them usable for a website. Because if you think
about it, if you use a font on a website, it means that it has to be able to
travel everywhere. Everyone has to be able to look at what the font does,
but it is not just an output. It is not just an endpoint. The font is active,
it means it is available. In theory, any font that appears on the web is both
display and program. By displaying the page, you need to run the font.
That means the font needs to be available as a source and as a result. That
means you have to publish your font. This has really created a big boom in
the last few years in Free Fonts, because that is the easiest way to deal with
that problem: allow people to download these fonts, but in a way that keeps
authorship clear, that keeps genealogy clear, and also propagates then the
possibility of making new fonts based on someone else’s work.

Free artifacts / open standards

It took me a while to figure this out. For me it was obvious that if you would
use Free Software, you would produce free artifacts. It seems obvious, but it
is not at all the case. There is full-fledged commercial production happening
with these tools. But one thing that keeps the results, the outcomes of these
projects freer than most commercial tools, is that there is really an emphasis
on open document formats. That is extremely important, because first of
all, it is very obvious that the documents that you produce with the tool,
should not belong to the software vendor. They are yours. And to be able
to own your own documents, you need to be able to inspect how they’re
produced. I know many tragic stories of designers that lost documents
because they could never open them again. There is really an emphasis
and a lot of work on making sure that the documents produced from these
tools remain ‘inspectable’, are documented, that either you can open them
in another tool or could develop a tool to have these files available for you.
It is really part and parcel of Free Software culture that you care about what generates your artifact, but also about the materiality of your artifact. Open
standards are important. Or maybe let’s say it is important that file formats are documented and can be understood. What is interesting to see is that in
this whole Libre Graphics world there is also a strong tradition of reverse
engineering, document activism, I would call it. They claim: documents need
to be free, and we will risk breaking the law to be able to understand how non-free documents are actually constructed. They are really working on trying to
understand non-free documents, to be able to read them and to be able to
develop tools for them, that they can be reused and remade. The difference
between a free and a non-free document is that, for example, an InDesign
file, which is the result of a commercial product, there is no documentation
available of how this file works. This means that the only way to open the
document, is with that particular program. It means there is a connection
between that what you’ve made and the software you used to produce it. It
also means that if the software updates or the licence runs out, you will not
have access to your own file. It means it is fixed. You can never change it
and you can never allow anyone else to change it. An open document format
has documentation. That means that not only the software that created it,
is available, and in that way you can understand how it was made, but also
there is independent documentation available that whenever a project, like
a software, doesn’t work anymore, or is too old to be run, or you don’t have
it available, you have other ways of understanding the document and being
able to open it and reuse and remake it. What is important is that around these open formats, you see that a whole ecosystem of tools exists to inspect, to create, to read, to change, to manipulate these formats. I think it is very
easy to see how around InDesign files this culture does not exist at all.

Sharing practice / re-learn

This way of working changes the way you learn, and therefore the way you
teach. And as many of us have understood the relation between learning
and practice, we’ve all been somehow involved in education. Many of us are
teaching in formal design or art education. And it is very clear how those
traditional schools are really not fit for the type of learning and teaching that
needs to happen around Libre Graphics. One of the problems we run into, is
the fact that validation systems are really geared towards judging individuals.
And our type of practice is always multiple. It is always about things that
happen with many people. And it is really difficult to inspire students to
work that way, and at the same time know that at the end of the day, they’ll
be judged on what they produced as an individual. In traditional education
there is always a separation between teaching technology and practice. You
have, in different ways, you have the studio practice, and then you have the
workshops. And it is very difficult to make conceptual connections between
the two. We end up trying to make that happen, but it is clearly not made
for that. And then there is the problem of hierarchies between tutor and
student, that are hard to break in formal education, just because the setup is,
even in very informal situations, that someone comes to teach and someone
else comes to be taught. And there is no way to truly break that hierarchy,
because that is the way a school works. For years we have been thinking about how to do teaching differently or how to do learning differently, and last year, for the first time, we organized a summer school. Just like a kind of experiment
to see if we could learn and teach differently. The title, the name of the
school is Relearn. Because this sort of relearning, for yourself but also for others, through teaching and learning, has really become a good methodology, it seems.
If I say ‘we’, that’s always a bit uncomfortable, because I like to be clear about
who that is, but when I’m speaking here, there are many ‘wes’ in my mind.

There is a group of designers called OSP. They started in 2006 with the simple decision to not use any proprietary software anymore for their work. And from that this whole set of questions and practices and methods developed. Right now, that’s about twelve people working in Brussels, having a design practice. I am lucky to be an honorary member of this group.
I’m in close contact with them, but I’m not actively working with the design
group. Another ‘we’, an overlapping ‘we’, is Constant, an association for
arts and media active in Brussels since 1996. Or 1997 maybe. Our interest
is more in mixing Copyleft thinking, Free Software thinking and feminism.
In many ways that intersects with OSP but they might phrase it in a different way. Another ‘we’ is the Libre Graphics community, which is an even more uncomfortable ‘we’. Because it includes engineers that would like to
conquer the world ... and small hyper intelligent developers that creep out
of their corners to talk about the very strange worlds they’re creating. Or
typographers that care about universal typefaces, or ... I mean there are many different people that are involved in that world. I think for this conversation, the ‘wes’ are: OSP, Constant and the Libre Graphics community,
whatever that is.

Libre Graphics annual meeting Leipzig 2014

We worked on a Code of conduct, which is something that seems to appear
in Free Software or tech conferences more and more. It comes a bit from
a US context. We have started to understand that the fact that Free Software is free doesn’t mean that everyone feels welcome. For a long time there have been, and there still are, large problems with diversity in this community. The
excitement about freedom has led people to think that people that were not
there would probably not want to be there and therefore had no role to be
there. For example, the fact that there are not a lot of women active in Free
Software, a lot less than in proprietary software, which is quite painful if
you think about it. It has to do with this sort of cyclical effect of because
women are not there, they will probably not be interested, and because they’re
not interested, they might not be capable or feel capable of being active. So they
might not belong. There is also a very brutal culture of harassment, of
racist and sexist language, of using imagery that is let’s say unacceptable,
and that needs to be dealt with. Over the last two years I think, documents
like Codes of conduct have started to come up from feminists that are active
in this world, like Geek feminism or the Ada initiative, as a way to deal
with this. And what it does, is it describes ... it is slightly pompous, in the
sense that it describes your values. But it is a way to acknowledge the fact
that these communities have a problem with harassment, first. That they
explicitly say we want diversity, which is important. That it gives very clear
and practical guidelines for what someone that feels harassed can do, who
he or she can speak to, and what will be the consequences. Meaning that
it takes away the burden, at least as much as possible, from someone that is harassed of actually having to defend the gravity of the case.

Art as integrative concept

For me calling myself an artist is useful, is very useful. I’m not busy with, let’s say, the institutional art context. That doesn’t help me, at all. But
what does help me is the figure of the artist, the kinds of intelligences that
I sort of project on myself and I use from others and my colleagues, before
and contemporary. Because it allows me to not have too many ... to be able
to define my own context and concepts, without forgetting practice. And I
think art is one of the rare places that allows this. Not only allows it, but
actually rigorously asks for it. It is really wanting me to be explicit about my
historical connections, my way of making, my references, my choices, that
are part of the situation I build. And the figure of the artist is a very useful
toolbox in itself. And I think I use it, more than I would have thought. It
allows me to make these cross connections in a productive way.


The making of Conversations was on many levels a process of dialogue, between people, processes, and systems.
Xavier Klein and Christoph Haag were as much involved
in editorial decisions as they were in creating an experimental platform that would allow us to produce a publication in a way true to the content of the conversations
it would contain. In August 2014 we discussed the ideas
behind their designs and the status of the systems they
were developing for the book that you are reading right
now.
I wanted to ask you, Xavier, how did you end up in Germany?
It’s a long story, so I’ll make it short. I benefited from the Leonardo program, a scholarship to do an internship abroad. So I searched for graphic design studios
that use Open Source and Free Software. I asked OSP first, but they said No.
I didn’t know LAFKON at this time, and a friend told me: Hey there is this
graphic design studio in Germany, so I asked and they said Yes. So I was
happy. (laughs)
How did you start working on this book?

I thought it would be nice to have a project during Xavier’s stay in Augsburg
with a specific outcome. Something going beyond pure experimentation.
So I asked Constant if there were any projects that need to be worked on.
And I’m really happy with the Conversations publication, because it is a
good mixture. There is the technical experiment, how you would approach
something like this using Free Software. And there is the editing side.
To read all these opinions and reflections. It’s really interesting from the
content side, at least for me – I don’t dare to speak for Xavier. So that’s
basically how it started.
You developed a constellation of tools that together are producing the book.
Can you explain what the elements are, how this book is made?

We decided in the beginning to use Etherpad for the editing. A lot of
documentation during Constant events was done with Etherpad and I found
its very direct access to editing quite inspiring. Earlier this year we prepared a
workshop for the Libre Graphics Meeting, where we’d have a transformation
from Etherpad pages to a printable .pdf. The idea was to somehow separate
the content editing and the rendering. Basically I wanted to follow some
kind of ‘pull logic’. At a certain point in the process, there is an interface
where you can pull out something without the need to interfere too much
with the inner workings of this part. There is the stable part, the editing on
the Etherpad, and there is something that can be more experimental and unstable, which transforms the content again into a stable, printable version. I
tried to create a custom markdown dialect, meant to be as simple as possible.
It should reduce to some elements, the elements that are actually needed.
For example if we have an interview, what is required from the content side?
We have text and changing speakers. That’s more or less the most important information.
So on the first level, we have this simple format and from there the transformation process starts. The idea was to have a level where basically anybody who knows how to use a text editor can edit the text. But at the same
time it should have more layers of complexity. It actually can get quite
complex during the transformation process. But it should always have this
level, where it’s quite simple. So just text and for example this one markup
element for ok now the speaker changes.
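The dialect itself is not spelled out in this conversation, so purely as an illustration, a reduced transformer for such a format could look like the Python sketch below; the ‘==’ speaker marker and the \speaker LaTeX macro are invented for the example, not taken from the actual setup.

    # Sketch of a minimal pad-to-LaTeX pass: ordinary text goes through
    # untouched, a line starting with '==' is treated as a speaker change.
    # The marker syntax and the \speaker macro are hypothetical.
    import sys

    def transform(lines):
        for line in lines:
            line = line.rstrip("\n")
            if line.startswith("=="):
                # '== Name' becomes a macro call announcing the new speaker.
                yield r"\speaker{%s}" % line.lstrip("= ").strip()
            else:
                # Ordinary text; blank lines stay as paragraph breaks.
                yield line

    if __name__ == "__main__":
        for out in transform(sys.stdin):
            print(out)

Fed with the plain-text export of a pad, a script like this keeps the editing side stable while the transformation side stays free to change.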
In the beginning we experimented with different tools, basically small scripts to perform all kinds of layout tasks. Xavier for example prepared a hotglue2svg converter. After that, we thought, why don’t we try to connect different approaches? Not only the very strict markdown to TeX to .pdf transformations, but to think about under which circumstances you would actually prefer a canvas-based approach. What can you do on a canvas
that you can’t do or is much harder with a markup language.
It seems you are developing an ad hoc markup language? Is that related to
what you wrote in the workshop description for Operating Systems: 1 Using
operating systems as a metaphor, we try to imagine systems that are both
structured and open?

Yes. The idea was to have these connected/disconnected parts. So you have
the part where the content is edited in collaboration and you have the transformer script running separately on the individuals’ computers. For me this
solved in a way the problem of stability. You can use a quite elaborate, reliable software like Etherpad and derive something from it without going into its inner workings. You just pull the content from it, without affecting the software too much. And you have the part where it can get quite experimental and unreliable, without affecting all collaborators. Because the process runs on your own computer and not on the server.

1  http://libregraphicsmeeting.org/2014/program/
The markup concept comes from the documentation of a video streaming
workshop in Linz. There we wanted to have the possibility to write the
documentation collaboratively during the workshop and we also needed to solve problems like How about the inclusion of images? That is where the first markup element came from, which basically was just a specific line of
text, which indicates ‘here should be this/that image’. If this specific line
appears in the text during the transformation process, it triggers an action
that will look for a specific file in the repository. If the image exists, it will
write the matching macro command for LaTeX. If the image is not in the repository, it will do nothing. The idea was that the creation of the .pdf should happen anyway, even though somebody’s repository might not be at the latest state and a missing image would prevent LaTeX from rendering the document. It should also ignore errors, for example if someone mistypes the name of the image or the command. It should not stop the process, but
produce a different output, e.g. without the image.
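Sketched in Python rather than in the Bash commands the actual setup triggers, that tolerant behaviour could look roughly like this; the GRAFIK keyword, the images/ folder and the macro call are stand-ins, not the real configuration.

    # Sketch of an error-tolerant include: a line such as 'GRAFIK myfigure'
    # becomes an \includegraphics call only if the file is present in the
    # repository; a missing or mistyped image is skipped and rendering
    # continues. Keyword, paths and macro name are hypothetical.
    from pathlib import Path

    REPO = Path("images")

    def expand_graphics(lines):
        for line in lines:
            if line.startswith("GRAFIK "):
                parts = line.split(maxsplit=1)
                name = parts[1].strip() if len(parts) > 1 else ""
                target = REPO / f"{name}.pdf"
                if name and target.exists():
                    yield r"\includegraphics{%s}" % target
                # Missing or unnamed image: emit nothing, keep the run going.
            else:
                yield line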
Why do you think the process should not stop when there’s an error? Why is
that so important?

For me it was important to ensure some kind of feedback, even if there might
be ‘errors’ in the output. Not just ‘not work’. It can be really frustrating, when the first thing you have to do is to find and solve a problem – which can be quite hard with this sort of unprofessional scripts – before anything happens at all. So at a certain point, at least something should appear, even if it’s not necessarily the way it was originally intended. Like a tolerance for errors, which would even produce something that may be different from what you expected. But it should produce ‘something’.
You imagine a kind of iterative development that we know from working with
code, that allows you to keep different versions, that keeps flowing in a way.
For example, this specific markup format. It’s basically markdown and
I wanted some more elements, like footnotes and the option to include
citations and comments. I find it quite handy, when you write software,
that you have the possibility to include comments that are not part of the
actual output, but part of the working process. I also enjoy this while
writing text (e.g. with LaTeX), because I can keep comments or previous
versions or drafts. So I really have my working version and transform this
to some kind of output.
But back to the etherpash workshop. Commands are basically comments
that will trigger some action, for example the inclusion of a graphic or
changing the font or anything. These commands are referenced in a separate
file, so everybody can have different versions of the commands on their own
machine. It would not affect the other people. For example, if you wanted to have a much more elaborate GRAFIK command, you could write it and use it within your transformer of the document, or you could introduce new commands that are written on the main pad, but would be ignored by other people, because they have a different reference file. Does this make
sense?
Yes. In a way, there are a lot of grey zones. There are elements that are
global and elements that are local; elements can easily go parallel and none
of the commands actually has always the same output, for everyone.

They can, but they do not need to. You can stick to the very basic version
that comes directly from the repository. You could use this version to create
a .pdf in the ‘original’ way, but you can easily change it on different levels.
You can change the Bash commands that are triggered by the transformer
script, you can work on the LaTeX macros or change the script itself. I
found it quite important to have different levels of complexity. You may go
deeper, but you do not necessarily have to. The Etherpad content is the very
top level. You don’t have to install any software on your computer, you can just open up a browser and edit the text. So this should make the access to collaboration easier. Because for a lot of experimental software you spend a lot of time just getting it to run. Most often you have a very steep learning curve and I found it interesting to separate this learning curve in a way. So
you have different layers and if you really want to reconfigure on a deep level,
you can, but you do not necessarily have to.
I guess you are talking about collaboration across different levels of complexity, where different elements can transform the final outcome. But if you
take the analogy of CSS, or let’s say a Content Management System that
generates HTML, you could say that this also creates divisions of labour. So
rather than making collaboration possible, it confines people to different
files. How do you think your systems invite people to take part in different
levels? Are these layers porous at all? Can they easily slip between different
roles, let’s say an editor, a typographer and a programmer?
Up to a certain extent it’s like a division of labour. But if you call it a separation of tasks, it definitely makes sense for me. It can be quite hard if you have to take over responsibility for everything at the same time. So it
makes sense for me, also for collaboration, to offer this separation. Because
it can be good to have the possibility not to have to deal with the whole
system and everything at the same time. You should be able to do so, but
you should not necessarily have to. I think this is important, because a lot
of frustration regarding Free Software systems comes from the necessity to
go to the deep level at an early stage. I mean it’s an interesting problem.
The promise of convenience is quite hard, because most times it does not really work. And it’s also fine that it doesn’t really work. At the same time
it’s frightening for people to get into it, and so I think it’s good to do this step by step and also to have an easy top-level opportunity to go into, for example, programming. This is also a thing I became really interested in: the principle of the commandline to ‘extend usage into programming’. 2
You do not have to have a development environment and then you compile
software and then you have software, but you have this flexible interface for
your daily tasks. If you really need to go to a deeper level, you can, at least with
Free Software. But you don’t have to ... compile your kernel every time.

Not every time! What I find interesting about your work is that you prefer not to conceal any layers. References, commands, markup hint at the existence of other layers, and the potential to go somewhere else. I wanted to ask you about your fascination or interest in something as ‘old school’ as Bash scripting. Why is it so interesting?

Maybe at first, it’s a bit of a fascination for the obscure. Normally, as a graphic designer, you wouldn’t think of using the commandline for your work. When I started to use GNU/Linux, I tried to stay away from the terminal. Which is basically, as I realised pretty soon, not possible. 3 At some
point, Bash scripting became really fascinating, because of the possibility to
use automation to correct or add functionalities. With the commandline
it’s easy to automate repetitive tasks, e.g. you can write a small script that
2
3

Florian Cramer. (echo echo) echo (echo): Command Line Poetics, 2007
let’s say hard

337

creates a separate .svg file for each layer in a .svg file 4 , convert this separated .svg files to .pdf files 5 and combine the .pdf files to a multipage
.pdf 6 . Just by collecting commands you’d normally type on your commandline interface. So in this case, automation helps to work around a missing
multipage support in inkscape. Not by changing the application itself, but
by plugging something ‘on top’ of it. I like to think of the Bash as glue
between different applications. So if we have a look now at the setup for
the conversations publication, we may see that Bash makes it really easy to
develop own configurations and setups. I actually thought about prefering
the word ‘setup’ to ‘writing software’ ...
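
A minimal sketch of the kind of glue script described here might look like this. It only covers the last two steps (per-layer .svg to .pdf, then joining into one multipage .pdf); the file names are made up, and it assumes the pre-1.0 Inkscape commandline flag --export-pdf and the pdftk syntax mentioned in the footnotes, not the actual script used for the book:

    #!/bin/sh
    # Sketch, not the book's script: assumes the per-layer files layer_*.svg
    # have already been split out of one .svg (e.g. with sed).
    for svg in layer_*.svg; do
        inkscape --export-pdf="${svg%.svg}.pdf" "$svg"   # one .pdf per layer
    done
    pdftk layer_*.pdf cat output multipage.pdf           # join into one multipage .pdf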

Are you saying you prefer setup ‘over’ configuration?

Setup or configuration of software ‘over’ actually writing software. Because
for me it’s often more about connecting different applications. For example,
here we have a browser-based text editor, from which the content is automatically pulled and transformed via text-transform tools and then rendered
as a .pdf. What I find interesting is that the scripts in between may actually
not be very stable, but they connect two stable parts. One is the Etherpad,
where the export function is taken ‘as is’, and you’ve got the final state of a
.pdf. In between, I try to have this flexible thing that just needs to work
at this moment, in my specific case. I mean certain scripts may reach quite
an amount of stability, but not necessarily. So it’s very good to have this
fixed state at the end.

You mean the .pdf?

I mean the .pdf, because ... These scripts are quite personal software and
so I don’t really think about other users besides me. For me it’s a whole
different subject to go to the usability level. That’s maybe also a reason for
the open state of the scripts. It would not make much sense – if I want
other people to have the opportunity to make use of these things – to have
black boxes. Because for this, they are much too fragile. They can be taken
over, but there is no promise of ... convenience? 7 And it’s also important
for myself, because the setups are really tailored to a specific use case and
therefore more or less temporary. So I need to be able to read and adapt
them myself.

4 using sed, stream editor for filtering and transforming text
5 using inkscape on the commandline
6 using pdftk
7 ... distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without
even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. Free Software Foundation. GNU General Public License, 2007

I know that afterwards you usually provide a description of how the collage
was made. You publish the scripts, sketches and intermediary outcomes.
So it seems that usability is more in how you give access to the process rather
than the outcome. Or would you say that software is the outcome?

Actually for me the process is the more interesting part of the work. A lot of
the projects are maybe more like a proof of concept than finished pieces of
software. I often reuse parts of these setups or software pieces, so it’s more
a collection of ‘how to do something’ than really a finished thing that’s now
suitable to produce this or that.
I’m just wondering, looking at your designs, if you would like that layering,
this instability, to be somehow legible in the .pdf or the printed object?

I don’t think that this instability is really legible. Because in the process
there’s a certain point where definitive decisions are taken. It’s also part of
the concept. You make decisions and those make the final state of the object
what it is. And if you want to get back to the more flexible part, then you
would really have to get back. So I don’t actually think that it is legible in
the final output, at first sight, that it is based on a very fluid working
process. And for me that’s quite ok. It’s also important for me – because
I tend not to do so – to take a decision at a certain point. But that’s not
necessarily the ultimate decision and therefore it’s also important to keep
the option open to redefine ... ‘the thing’.

What you’re saying is that you can be decisive in your design decisions because
the outcome could also have been different. You could always regenerate the .pdf
with other decisions.
Yes. For example, I would regenerate the .pdf with the same decisions;
another person maybe would take different decisions. But that’s one step
before the final object. For example, if we do not talk about the .pdf but
we actually talk about the book, then it’s very clear that there are decisions
that need to be taken or that have been taken. And actually I like the feeling
of convenience when things get finished. They are done. Not configurable
forever.

( laughs) That’s convenient, if things get done!
For this specific book, you have made a few decisions, for example your selection of fonts is particular.
Xavier, can you say something about the typography of Conversations?

Huuumn yep, for the typographic decisions ... in the beginning we searched for
fancy fonts, but in a way came back to using very classic fonts, or rather one classic
font. So the Junicode 8 for the text and the OCR-A 9 for anything else. Because
we decided to focus on testing different ways of doing the layout and to use the fonts as a
way to keep a certain continuity between the parts. We thought this could be more
interesting than to show that we can find a lot of beautiful, fancy fonts.

So in the beginning we thought about having a different font for every
speaker, but sooner or later we realised that it would be good to have something that keeps the whole thing together. Right now, these are the two
fonts. The Junicode, which is a font for medievalists, and the OCR-A,
which is an optical character recognition font from the early age of computer technology. So the hypothesis was to have this combination – a very
classical typeface inspired by the 16th century and a typeface optimized for
machine reading – that maybe will produce an interesting clash of two different approaches, while at the same time providing a continuous element
throughout the book. But that still has to be proven in the final layout.

I find it interesting that both fonts in their own way are somehow conversational. They are both used in situations where one system needs to talk to
another.

Yeah, definitely in a way. They are both optimised for a special usage, which,
by the way, isn’t the usage in our case. One for the display of medieval
texts, where you have to have a lot of different signs and ligatures and ... that’s
the Junicode. The other one, the OCR-A, is optimized to be legible by
machines. So those are two different directions of conversation. And they’re
both Free and Open Source fonts ...
And for the layout? How are the divider pages going to be constructed?

For the divider pages, it’s an application ‘Built with Processing’, done by
Benjamin 10 . In a way, it’s a different approach, because it’s a piece of software with
an extensive Graphical User Interface, with a lot of options. So it’s different
from the very modular, connective approach. There we decided to have this
software, which is directly controlled by the controller, the person who uses
it. And again, there is this moment of definitive decision. Ok, this is exactly
how I want the title pages to look. And then they are put in a fixed state.
At the same time, the software will be part of the repository, to be usable
as a tool. So it’s a very ... not a ‘very classic’ ... approach. To write ‘your’
software for ‘your’ very specific use case. In a more monolithic way ...

8 http://junicode.sourceforge.net/
9 http://sourceforge.net/projects/ocr-a-font/
10 Stephan

Just to add this. In this custom markdown dialect, I decided at a point
to include a command, which is INCLUDEPAGES, where you can provide
a .pdf file via a URL to be included in the document. So the .pdf may
be stored anywhere, as long as it is accessible over the internet. I found
this an interesting opportunity for collaboration. Because if somebody does
not want to stick to the grid given by the LaTeX configuration or to this
kind of working in general, this person could create a .pdf, store it online,
reference it and the file will be included. This can be a very disconnected
way of contributing to the final book. And that’s also a thing we’re now
trying to test ourselves. Because in the beginning we developed a lot of
different little scripts, for example the hotglue2svg converter. And right
now we’re trying to extend this. For example, to create one interview in
Scribus and include the .pdf made with Scribus. To also test different
approaches ourselves.
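
A rough sketch of how such a command could be handled during the conversion to LaTeX (hypothetical: the input file name is made up, and the LaTeX side assumes the pdfpages package is loaded in the preamble; this is not the actual conversion script):

    #!/bin/sh
    # Sketch: turn lines like "INCLUDEPAGES http://example.org/contribution.pdf"
    # into \includepdf calls, downloading the referenced file first.
    grep '^INCLUDEPAGES ' interview.txt | while read -r _ url; do
        file=$(basename "$url")
        curl -sL -o "$file" "$url"                    # fetch the remote .pdf
        printf '\\includepdf[pages=-]{%s}\n' "$file"  # emit the LaTeX include
    done
    # A real setup could additionally emit \addcontentsline{toc}{section}{...}
    # so that the included .pdf shows up in the table of contents.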
This book will be both a collage and have an overall, predefined structure
provided by the layout engine?

I’m trying to make pragmatic use of the functionalities of LaTeX, which is
used for the final compiling of the .pdf. So, for example, ready-made
.pdf files included in the final document are also referenced in the table of
contents.

Can you explain that again?

Separate .pdfs that are included in the final document will be referenced
in the table of contents. We can still make use of the automatic generation
of page numbers in the table of contents, so there it goes together. There
are certain limits; for example, since the .pdfs are more like finished documents, indexing will probably not work. Because even if you can extract
references from the .pdf, I haven’t found a way so far to find out the
page number in a reliable way. There you also realise that you can do much
more with the plain text sources than you can do with a finished document.
But I think that’s ok. In this case you wouldn’t want to have a keyword reference
to the .pdf, while it’s still in the table of contents ...
What if someone would want to use one of these interviews for something else?
How could this book become a source for another publication?
That’s also an advantage of the quite simple source format on the Etherpad.
It can be easily converted to, e.g., simple markdown, just by a little script.
I found this quite important – because at this point we’re putting quite an
amount of work into the preparation of the texts – not to have it in a format
that is not parseable. I really wanted to keep the documents transformable
in an easy way. So now you could just have a ~five-liner that will pull the text
from the Etherpad and convert it to simple markdown or to HTML.
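
Such a five-liner could look roughly like this (a hypothetical sketch: the pad address is made up, and it assumes Etherpad’s plain-text export URL plus pandoc, which appears in the colophon’s tool list):

    #!/bin/sh
    # Sketch: pull an interview from the Etherpad and convert it to HTML
    # (switch "-t html" to another pandoc writer for other output formats).
    PAD="http://pad.example.org/p/interview"        # made-up pad address
    curl -s "$PAD/export/txt" > interview.txt       # Etherpad plain-text export
    pandoc -f markdown -t html -o interview.html interview.txt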
Wonderful.

If you have a more or less clean source format, then in most cases it’s easy
to convert it to different formats. For example, the Evan Roth interview
you provided as a ConTeXt file. So with some text manipulation, it was
easy to do the transformation to our Etherpad markup. And it would be
harder if the content were stored as an OpenOffice document, but still feasible.
.pdf is in a way the worst case, because it’s much harder to extract usable
content again, depending on the creator. So I think it’s important to keep
the content in a readable and understandable source format.

Xavier, what is going to happen next?

Right now, I’m the guy who tests things in Scribus and Inkscape. But I don’t know if that’s
the answer to your question.

I was just curious because you still have a month to work on this, so I was
wondering ... are there other things you are testing or trying?

Yeah, I think I want to finish the hotglue2svg.sh, I mean it’s my first
Bash program, I want to raise my baby. ( laughs) But right now I’m trying to
find different ways of doing the layout. The first one is the one with the big squares, the
big unicode characters and all the arrows. So it’s very complicated, but it’s an
attempt to find another way to express a conversation in text.

Can you say more about that?

Because in the beginning, my first try was to keep the ‘life’ of a conversation in
the text with some things like indentation, or with graphic things like the choice
of the unicode characters. If this can be a way to express a conversation. Because
it’s hard to do it with programming stuff, so we’re using GUI-based software.

It comes back to the question of what you do differently if you work
with direct visual feedback. So you don’t try to reduce the content to get
it through a logical structure. Because that’s in a way how the markdown
to LaTeX transformation does it. You set certain rules, which may in
special cases be soft rules, but you really try to establish a logical structure and
have a set of rules and apply them. For me, it’s also an interesting question.
If you think of grid-based graphic design, where you try to introduce a set
of rules in the beginning and then keep to them for the rest of the project, that’s
in a way a very obvious case for computation. Where you just apply a set of
rules. You are confronted a lot with this application of rules in daily graphic
design. And this is also a way of working you learn during your studies:
stick to certain logical or maybe visual grids. And so now the question is:
what’s the difference if you do a really visual layout? Do you deal differently
with the content, does it make sense, or, if you’re just always coming back
to a certain grid, then you might as well do it by computation. So that’s
something that we wanted to find out. What advantage do you really gain
from having a canvas-based approach throughout the layout process?
In a way the interviews are very similar, because it’s always people speaking,
but at the same time each of the conversations is slightly different. So in what
way is the difference between them made legible, through the same set of rules
or by making specific rules for each of them?
If you do the layout by hand you can take decisions that would be much
harder to translate to code. For example, how to emphasize certain parts
of the text or the speaker. You’re much closer to the interpretation of the
content? You’re not designing the ruleset but you are really working on the
visual design of the content ... The point why it’s interesting to me is that,
working as a designer, you quite often get reduced to this visual design of the
content; at the same time it may make sense in a lot of cases. So it’s an evaluation
of these different approaches. Do you design the ruleset or do you design
the final outcome? And I think both have advantages and disadvantages.


Colophon

In conversation with: Agnes Bewer, Alexandre Leray, An Mertens, Andreas Vox, Asheesh
Laroia, Carla Boserman, Christina Clar, Chris Lilley, Christoph Haag, Claire Williams, Cornelia
Sollfrank, Dave Crossland, Denis Jacquerye, Dmytri Kleiner, Eleanor Greenhalgh,
Eric Schrijver, Evan Roth, Femke Snelting, Franziska Kleiner, George Williams, Gijs de Heij,
Harrisson, Ivan Monroy Lopez, John Haltiwanger, John Colenbrander, Juliane De Moerlooze,
Julien Deswaef, Larisa Blazic, Ludivine Loiseau, Manuel Schmalstieg, Matthew Fuller, Michael
Murtaugh, Michael Terry, Michele Walther, Miguel Arana Catania, momo3010, Nicolas Malevé,
Pedro Amado, Peter Westenberg, Pierre Huyghebaert, Pierre Marchand, Sarah Magnan, Stéphanie
Vilayphiou, Tom Lechner, Urantsetseg Ulziikhuu, Xavier Klein

Concept, development and design: Christoph Haag, Xavier Klein, Femke Snelting

Editorial team: Thomas Buxó, Loraine Furter, Maryl Genc, Pierre Huyghebaert, Martino Morandi
Transcriptions: An Mertens, Boris Kish, Christoph Haag, Femke Snelting, George Williams, Gijs
de Heij, ginger coons, Ivan Monroy Lopez, John Haltiwanger, Ludivine Loiseau, Martino Morandi,
Pierre Huyghebaert, Urantsetseg Ulziikhuu, Xavier Klein
Chapter opener: Built with petter by Benjamin Stephan
-> http://github.com/b3nson/petter

Tools: basename, bash, bibtex, cat, Chromium, cp, curl, dpkg, egrep, Etherpad, exit,
ftp, gedit, GIMP, ghostscript, Git, GNU coreutils, grep, ImageMagick, Inkscape, Kate, man,
makeindex, meld, ne, pandoc, pdflatex, pdftk, Processing, python, read, rev, Scribus,
sed, vim, wget
Fonts: Junicode by Peter S. Baker, OCR-A by John Sauter

Source Files:
Texts, fonts and pdf: http://conversations.tools
Software: https://github.com/lafkon/conversations
Published by: Constant Verlag (Brussels, January 2015)
ISBN: 9789081145930

Copyright (C) Constant 2014
Copyleft: This work is free. You may copy, distribute and modify
it according to the terms of the Free Art License (see appendix)
This publication is made possible by the Libre Graphics Community, through the financial support
from the European Commission (Libre Graphics Research Unit) and the Flemish authorities.

Printed in Germany.

http://www.online-druck.biz

Index

Acid Test, 145–147
Activism, 302, 320, 326
Adafruit, 225
Adobe Illustrator, 66, 101, 159–161, 292
Adobe InDesign, 15, 16, 19, 326, 327
Adobe PageMaker, 16, 17, 159, 160
Adobe Photoshop, 279, 280
Adobe Systems, 8, 24, 101, 142, 156,
157, 159–162, 279, 291,
297, 302
Algorithm, 227, 236
Amado, Pedro, 275
Anthropology, 41, 202, 232
AOL Inc., 25
Apple Inc., 8, 23, 24, 142, 159–162
Application Programming Interface, 118,
276
Arana Catania, Miguel, 88
Arduino, 83, 226
Artist, 7–9, 17, 73, 99–101, 146, 190,
191, 213–215, 223, 224,
240, 247, 319, 323, 329

Bézier, Pierre, 303
Baker, Peter S., 351
Barragán, Carlos, 88
Beauty, 14, 23, 32, 47, 55, 59, 78, 81,
162, 176, 230, 236, 268,
293, 305, 324, 325, 340
Benkler, Yochai, 187, 192, 193
Bewer, Agnes, 37
Blanco, Chema, 90
Blazic, Larisa, 7
Blender, 55, 72, 221, 222, 276
Blokland, Petr van, 158, 159
Body, 39, 77, 135, 141, 146, 178, 219,
242
Boserman, Carla, 86
Bradney, Craig, 13, 16, 80
Brainch, 110
Brainerd, Paul, 159
Brussels, 3, 37, 71, 187, 195, 203, 213,
245, 248, 287, 303, 328,
351
Buellet, Stéphane, 215
Bug, 17, 23, 25, 66, 119, 171, 172, 201,
203–205, 292, 298, 302

Bugreport, 280
Bush, George W., 290
Buxó, Thomas, 351

Canvas, 13, 17, 57, 58, 63, 65, 66, 301,
334
Carson, David, 161
Cayate, Henrique, 278
Chastanet, François, 233
Clar, Christina, 99
Colenbrander, John, 99
Collaboration, 3, 7, 9, 57, 100, 101, 109–
112, 116–120, 126, 127,
160, 162, 203, 213, 215,
223, 224, 232, 244, 246,
253, 275, 289, 290, 292,
301, 311, 334, 336, 337,
341
Commandline Interface, 39, 59, 298,
299, 336–338, 342, 351
Commons, 192–194
Communism, 187, 192–194
computer department, 275
Constant, 3, 99, 109, 124, 137, 171, 213,
246, 283, 311–313, 328,
333, 334, 351
ConTeXt, 42, 47–55, 57–62, 66, 67, 103,
127, 128, 155, 181, 182,
191, 192, 261, 276, 278,
300, 304, 311–314, 320–
324, 328, 329, 342
Contract, 189
coons, ginger, 351
Copyleft, 162, 276, 328
Creative Commons, 18, 27, 218, 244,
249, 250, 321
Crossland, Dave, 29, 92, 155, 351
CSS, 53, 54, 116, 117, 142, 144–146, 336

Dahlström, Erik, 138
Dance, 64, 81, 219
de Heij, Gijs, 351
de Moerlooze, Juliane, 37
Debian, 3, 37, 38, 40, 41, 100, 156, 201,
203, 205, 207
Designer, 3, 7–9, 16, 17, 23, 28, 99, 114,
115, 135, 140, 142, 146,

147, 149, 150, 155, 158,
160, 163, 164, 174, 187–
190, 193, 194, 227, 235,
261, 262, 266, 267, 275,
278, 279, 282, 288, 292,
297, 299–301, 304, 305,
311, 323, 326, 328, 343
Desktop Publishing, 9, 61, 159–161,
276, 279
Deswaef, Julien, 88
Developer, 3, 7–9, 13–15, 17, 19, 23, 40,
47, 49, 54, 55, 58, 59, 71,
74, 99, 102, 104, 105, 112,
115, 123, 128, 135, 149,
150, 155, 162, 166, 171,
174, 177, 179, 183, 190,
196, 201, 203, 204, 207,
208, 213, 215, 216, 225,
233, 235, 254, 261, 265,
279, 299–302, 306, 328,
337
Documentation, 27, 43, 51, 52, 54, 55,
57, 60, 176, 208, 230–232,
238, 239, 264–266, 307,
313, 326, 334, 335
Dropbox, 118, 128
Duffy, Maírín, 206

Education, 8, 42, 43, 100, 165, 166, 248,
275, 276, 279, 282, 327
Efficiency, 41, 43, 75, 78, 206, 289, 297,
306
Egli, Simon, 292
Ehr, Jim von, 160, 161
Emmons, Andrew, 138
Encoding, 24, 261, 262, 264–267
ePub, 105
Etherpad, 117, 118, 334–336, 338, 342,
351
EyeWriter, 214, 223–225, 227, 228, 235,
236
Farhner, Todd, 145
Feminism, 37, 41, 328, 329
Firefox, 144, 177, 283
Flash, 101, 208, 215, 279

FontForge, 23, 25–27, 29, 30, 32,
165, 166, 268, 276,
298, 299
FontLab, 28, 162, 163, 276
Fontographer, 24, 160–163, 292
Free Art License, 244, 351, 354
Free Culture, 7, 8, 13, 102–104,
201, 319–322
Freeman, Mark, 203
Fried, Limor, 225
FrontPage, 25
Frutiger, Adrian, 293
Fuller, Matthew, 297
Fun, 14, 15, 49, 57, 65, 67, 72, 78,
217, 227, 232, 235,
238, 246, 253
Furter, Loraine, 351

Gaulon, Benjamin, 223
Genc, Maryl, 351
Gender, 9, 47, 48, 201, 204, 205, 302
Ghali, Jean, 80
GIMP, 171, 172, 174, 179–183, 276, 279,
280, 298, 299, 351
Git, 57, 109–121, 123–125, 127–129,
203, 313, 320, 351
GitHub, 7, 111, 116, 120–124, 126, 128
Gitorious, 111, 116, 121, 122, 124
Glitch, 305
Glyph, 31, 48, 120, 121, 165, 262, 266,
268, 291, 314
GNU General Public License, 219, 253,
305, 321, 338
Goller, Ludwig, 304
Google Summer of Code, 205, 206
Graphic Design, 7, 9, 111, 113, 115, 116,
119, 156, 159, 161, 162,
175, 227, 280, 287, 297,
298, 306, 311, 312, 321,
333, 343
Graphical User Interface, 14, 29, 73,
159–161, 300, 301, 340,
343
Greenhalgh, Eleanor, 90, 99
Haag, Christoph, 99, 333, 351

Hagen, Hans, 47–50, 55, 56
Haltiwanger, John, 47, 213, 351
Heinemeier Hansson, David, 252
Harrington, Bryce, 301
Harrison, 155, 187, 287
Hello World, 235
Hickson, Ian, 146
HTML, 24–27, 48, 52–54, 116, 137,
138, 141, 149, 175, 319,
336, 342
Hugin, 82
Huyghebaert, Pierre, 48, 58, 109, 135,
155, 289, 298, 351

Imposition, 73, 75, 76, 80, 81, 83, 312
Infrastructure, 27, 50, 160, 172, 173,
180, 325
Inkscape, 66, 72, 117, 143, 205, 276,
298–301, 338, 342, 351
Internet Explorer, 142, 144, 197, 283
Internet Relay Chat, 19, 138, 203, 206,
208, 276, 300
iPhone, 226, 230, 238
IT Department, 8, 156, 275
Jacquerye, Denis, 165, 261
Jay-Z, 252
Jenkins, Mark, 220
Joint Photographic Experts Group, 128
Juan Coco, Mireia, 94

Karow, Peter, 159
KATSU, 220, 249
Kerning, 31, 52, 299
Kish, Boris, 351
Klein, Xavier, 333, 351
Kleiner, Dmytri, 187
Kleiner, Franziska, 187
Knuth, Donald, 51, 54, 80, 158, 300,
312, 323
Kostrzewa, Michael Dominic, 149
KRS-One, 251

Labour, 183, 187–190, 192–194, 197,
299, 336, 337
LAFKON Publishing, 333

Laidout, 71–73, 75, 78–80, 82, 83
Laroia, Asheesh, 201
LaTeX, 49–51, 60, 66, 290, 335, 336,
341, 343
Laughing, 23, 25–27, 31, 38, 56, 64,
74, 79, 139, 144, 146, 189,
194, 196, 204, 208, 216,
220, 221, 224, 227, 230,
232, 233, 240, 246, 254,
265, 266, 268, 305, 333,
339, 342
Lawyer, 136, 146, 149, 192, 324
Lechner, Tom, 71
Lee, Tim Berners, 139
Leray, Alexandre, 109
Levien, Raph, 302
Libre Fonts, 196, 275, 287, 299, 324, 325
Libre Graphics Meeting, 3, 7, 8, 13,
23, 71, 110, 135, 149, 150,
155, 171, 181, 201, 208,
314, 319, 328, 334
Libre Graphics Research Unit, 3, 109,
261, 314, 351
Lilley, Chris, 135
Linnell, Peter, 13, 17, 18
Loiseau, Ludivine, 71, 109, 155, 311,
351
Lua, 50–52, 59, 60
Lupton, Ellen, 304
Müller Brockman, Joseph, 305
Macromedia, 24, 101, 137, 161
Magnan, Sarah, 109
Mailing list, 40, 41, 47, 50, 162, 202,
205, 263, 299, 300, 322
Malevé, Nicolas, 135, 261
Mansoux, Aymeric, 8
Manual, 43, 51, 56, 60, 61, 157, 201, 299,
307
Marchand, Pierre, 58, 71, 109, 261, 311,
351
Marini, Anton, 247
Markdown, 52, 53, 105, 247, 334, 335,
341–343
Markup, 52, 53, 213–215, 222, 224, 237,
251, 334, 335, 337, 342
Marx, Karl, 187, 188

Mathematics, 26, 37, 39, 40, 42, 43, 71,
72, 155, 158
Mauss, Marcel, 187, 195
MediaWiki, 173, 181
Mercurial, 110
Meritocracy, 126
Mertens, An, 37, 351
Metafont, 158
Microsoft, 16, 18, 24, 25, 56, 57, 144,
150, 162, 197, 276, 283
Monotype, 324
Monroy Lopez, Ivan, 111, 171, 351
Morandi, Martino, 351
Moskalenko, Oleksandr, 13
Multiple Master, 291, 292
Murtaugh, Michael, 99
Netscape, 142

Open Font Library, 27
OpenOffice, 117, 342
Opera, 138, 144
OSP, 3, 57, 81, 109–112, 114, 120, 122–
126, 128, 135, 155, 187,
207, 227, 268, 287, 297,
298, 302, 303, 305–307,
311–313, 324, 328, 333
OSPimpose, 312
Otalora, Olatz, 94

Pérez Aguilar, Ana, 94
PDF, 14, 18, 52, 72, 73, 122, 128, 129,
156, 298, 307, 312, 324,
334–336, 338, 339, 341,
342
Peer production, 187, 189, 191, 192, 194,
195, 197, 288
PfaEdit, 25
Pinhorn, Jonathan, 314
Piracy, 15, 287–289
Pixar, 197
Plain Text, 80, 140, 341
Podofoimpose, 71, 80, 81
Police, 150, 179, 215, 223, 239–241
PostScript, 18, 24, 25, 27, 159–162
Printing, 14, 15, 17, 18, 23, 24, 53, 72,
76, 77, 83, 103, 129, 148,

158–161, 223, 234, 247,
263, 275, 279, 298, 300–
302, 304, 305, 314, 324,
325
Problems, 28, 39, 42, 43, 47, 48, 80–82,
104, 111, 121, 122, 128,
137, 144, 157, 187, 193,
195, 196, 201, 203, 205,
217, 219, 226, 229, 233,
239, 242, 265, 277, 289,
300, 327, 329, 335, 337
processing.org, 67, 247, 276, 297, 340
Public Domain, 218, 221, 250, 282, 303
Qt, 78
QuarkXpress, 15, 161, 196, 290

Recipe, 125, 127, 128, 306, 307, 320, 321
Relearn Summerschool, 109, 327
Release early, release often, 114, 221
Robofog, 161
Robofont, 161
Rossum, Just van, 161
Roth, Evan, 213

Safari, 144
Samedies, 37, 40, 203
Sauter, John, 351
Schmalstieg, Manuel, 311
Schmid, Franz, 13, 15, 17
Schrijver, Eric, 109
Scribus, 13–19, 57, 61, 62, 65, 71, 79–81,
113, 115, 128, 157, 187,
196, 197, 276, 297–302,
313, 341, 342, 351
Scribus file, 113, 119
Sexism, 40, 328
Shakespeare, William, 23, 25, 26
Sikking, Peter, 172
Smythe, Dallas, 190
Snelting, Femke, 3, 297, 319, 351
Sobotka, Troy James, 227
Sollfrank, Cornelia, 319
SourceForge, 111
Sparkleshare, 118
Spencer, Susan, 92

Stable, 51, 58, 324, 334, 335, 338
Stallman, Richard, 165
Standards, 17, 101, 135, 136, 138, 140,
141, 145–147, 223, 250,
262, 291, 293, 298, 303,
304, 326
Stephan, Benjamin, 340, 351
Stroke, 65, 214, 216, 234, 243, 248, 291
Subtext, 47, 53, 54, 56, 57, 61
Sugrue, Chris, 214
SVG, 119, 135, 136, 138, 140, 143–145,
148, 215, 298, 301, 334,
338, 341
SVN, 111, 112, 117, 120, 320
Telekommunisten, 187
TEMPT, 214, 223, 224, 235, 236
Terry, Michael, 171
TeX, 47–49, 51, 52, 55, 59, 60, 80, 158,
161, 312, 323, 334
Torrone, Phil, 225
Torvalds, Linus, 112, 114, 115, 118, 246,
252
Tschichold, Jan, 52, 57
Tucker, Benjamin, 187, 189, 192
Typesetting, 24, 51–55, 57, 60, 61, 66,
158, 287, 323
Typography, 3, 9, 16, 24, 48, 51, 53,
61, 117, 155–159, 161–
165, 187, 195, 196, 220,
235, 261, 276, 287–291,
293, 298–300, 304, 314,
340
Ubuntu, 102
Ulziikhuu, Urantsetseg, 99, 351
Undocumented, 50
Unicode, 23, 24, 26, 27, 48, 261–268,
342, 343
Universal Font Object, 163
Unstable, 324, 334
User, 3, 9, 13–17, 19, 25, 32, 37, 47, 49,
50, 52, 54–56, 58, 64, 79,

100–102, 104, 141, 146,
159, 160, 166, 171–177,
179, 181, 182, 196, 208,
215, 222, 261, 266–268,
279, 280, 283, 288, 289,
300–302, 319, 338, 340
Utopia, 100, 251

Veen, Jeff, 146
Version Control, 7, 57, 109–112, 116–
119, 123–125, 127, 144,
149, 201, 202, 207, 264,
300, 305, 307
Vilayphiou, Stéphanie, 109, 213
Visual Culture, 113, 114, 117, 122, 124,
128
Vox, Andreas, 13, 80, 302, 351

Wall, Larry, 63
Walther, Michele, 213
Warnock, John, 159
Watson, Theo, 214
Westenberg, Peter, 187, 213
What You See Is What You Get, 25, 61–
65, 283
Wilkinson, Jamie, 214
Williams, Claire, 99
Williams, George, 14, 23, 79, 299, 351
Wishlist, 246
Wium Lie, Håkon, 142, 146
Workflow, 52, 53, 60, 105, 109, 115, 119,
298, 300, 312
World Wide Web Consortium, 135–142,
145, 146, 304
XML, 52, 80, 144, 148, 158, 163, 175,
213, 214, 216, 233, 234,
301
Yildirim, Muharrem, 242, 245
Yuill, Simon, 232

Free Art License 1.3. (C) Copyleft Attitude, 2007. You can make reproductions and distribute this license verbatim (without any changes). Translation: Jonathan Clarke, Benjamin
Jean, Griselda Jung, Fanny Mourguet, Antoine Pitrou. Thanks to framalang.org
PREAMBLE

The Free Art License grants the right to freely
copy, distribute, and transform creative works
without infringing the author’s rights.
The Free Art License recognizes and protects
these rights. Their implementation has been
reformulated in order to allow everyone to use
creations of the human mind in a creative manner, regardless of their types and ways of expression.
While the public’s access to creations of the human mind usually is restricted by the implementation of copyright law, it is favoured by
the Free Art License. This license intends to
allow the use of a work’s resources; to establish
new conditions for creating in order to increase
creation opportunities. The Free Art License
grants the right to use a work, and acknowledges the right holders’ and the users’ rights and
responsibility.
The invention and development of digital technologies, Internet and Free Software have
changed creation methods: creations of the
human mind can obviously be distributed, exchanged, and transformed. They allow to produce common works to which everyone can
contribute to the benefit of all.
The main rationale for this Free Art License
is to promote and protect these creations of
the human mind according to the principles
of copyleft: freedom to use, copy, distribute,
transform, and prohibition of exclusive appropriation.
DEFINITIONS

“work” either means the initial work, the subsequent works or the common work as defined
hereafter:
“common work” means a work composed of the
initial work and all subsequent contributions to
it (originals and copies). The initial author is
the one who, by choosing this license, defines
the conditions under which contributions are
made.
“Initial work” means the work created by the
initiator of the common work (as defined
above), the copies of which can be modified by
whoever wants to
“Subsequent works” means the contributions
made by authors who participate in the evolution of the common work by exercising the
rights to reproduce, distribute, and modify that
are granted by the license.
“Originals” (sources or resources of the work)
means all copies of either the initial work or any
subsequent work mentioning a date and used

by their author(s) as references for any subsequent updates, interpretations, copies or reproductions.
“Copy” means any reproduction of an original
as defined by this license.
OBJECT

The aim of this license is to define the conditions under which one can use this work freely.
SCOPE

This work is subject to copyright law. Through
this license its author specifies the extent to
which you can copy, distribute, and modify it.
FREEDOM TO COPY (OR TO MAKE
REPRODUCTIONS)

You have the right to copy this work for yourself, your friends or any other person, whatever
the technique used.
FREEDOM TO DISTRIBUTE, TO
PERFORM IN PUBLIC

You have the right to distribute copies of this
work; whether modified or not, whatever the
medium and the place, with or without any
charge, provided that you: attach this license
without any modification to the copies of this
work or indicate precisely where the license can
be found, specify to the recipient the names of
the author(s) of the originals, including yours
if you have modified the work, specify to the
recipient where to access the originals (either
initial or subsequent). The authors of the originals may, if they wish to, give you the right to
distribute the originals under the same conditions as the copies.
FREEDOM TO MODIFY

You have the right to modify copies of the originals (whether initial or subsequent) provided
you comply with the following conditions: all
conditions in article 2.2 above, if you distribute
modified copies; indicate that the work has
been modified and, if it is possible, what kind
of modifications have been made; distribute the
subsequent work under the same license or any
compatible license. The author(s) of the original work may give you the right to modify it
under the same conditions as the copies.
RELATED RIGHTS

Activities giving rise to authors’ rights and
related rights shall not challenge the rights
granted by this license. For example, this is the
reason why performances must be subject to the
same license or a compatible license. Similarly,
integrating the work in a database, a compilation or an anthology shall not prevent anyone
from using the work under the same conditions
as those defined in this license.
INCORPORATION OF THE WORK

Incorporating this work into a larger work that
is not subject to the Free Art License shall not
challenge the rights granted by this license. If
the work can no longer be accessed apart from
the larger work in which it is incorporated, then
incorporation shall only be allowed under the

condition that the larger work is subject either
to the Free Art License or a compatible license.
COMPATIBILITY

A license is compatible with the Free Art License provided: it gives the right to copy, distribute, and modify copies of the work including for commercial purposes and without any
other restrictions than those required by the
respect of the other compatibility criteria; it
ensures proper attribution of the work to its
authors and access to previous versions of the
work when possible; it recognizes the Free Art
License as compatible (reciprocity); it requires
that changes made to the work be subject to the
same license or to a license which also meets
these compatibility criteria.
YOUR INTELLECTUAL RIGHTS

This license does not aim at denying your author’s rights in your contribution or any related
right. By choosing to contribute to the development of this common work, you only agree to
grant others the same rights with regard to your
contribution as those you were granted by this
license. Conferring these rights does not mean
you have to give up your intellectual rights.
YOUR RESPONSIBILITIES

The freedom to use the work as defined by
the Free Art License (right to copy, distribute,
modify) implies that everyone is responsible for
their own actions.
DURATION OF THE LICENSE

This license takes effect as of your acceptance
of its terms. The act of copying, distributing,
or modifying the work constitutes a tacit agreement. This license will remain in effect for as
long as the copyright which is attached to the
work. If you do not respect the terms of this
license, you automatically lose the rights that
it confers. If the legal status or legislation to
which you are subject makes it impossible for
you to respect the terms of this license, you may
not make use of the rights which it confers.
VARIOUS VERSIONS OF THE LICENSE

This license may undergo periodic modifications to incorporate improvements by its authors (instigators of the Copyleft Attitude
movement) by way of new, numbered versions.
You will always have the choice of accepting the
terms contained in the version under which the
copy of the work was distributed to you, or alternatively, to use the provisions of one of the
subsequent versions.
SUB-LICENSING

Sub-licenses are not authorized by this license.
Any person wishing to make use of the rights
that it confers will be directly bound to the authors of the common work.
LEGAL FRAMEWORK

This license is written with respect to both
French law and the Berne Convention for the
Protection of Literary and Artistic Works.

 
