trans in Stalder 2018


of life. However,
the emergence of this condition pre-dates computer networks. In fact, it
has deep historical roots, some of which go back to the late nineteenth
century, but it really came into being after the late 1960s. As many of
the cultural and political institutions shaped by the previous condition
-- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
forms of personal and collective orientation and organization emerged
which have been shaped by the affordances of this new condition. Both
the historical processes which unfolded over a very long time and the
structural transformation which took place in a myriad of contexts have
been beyond any deliberate influence. Although obviously caused by
social actors, the magnitude of such changes was simply too great, too
distributed, and too complex to be attributed to, or molded by, any
particular (set of) actor(s).

Yet -- and this is the core of what motivated me to write this book --
this does not mean that we have somehow moved beyond the political,
beyond the realm in which identifiable actors and their projects do
indeed shape our collective []{#Page_vii type="pagebreak"
title="vii"}existence, or that there are no


ture
development already expressed within contemporary dynamics. On the
contrary, we can see very clearly that as the center -- the established
institutions shaped by the affordances of the previous condition -- is
crumbling, more economic and political projects are rushing in to fill
that void with new institutions that advance their competing agendas.
These new institutions are well adapted to the digital condition, with
its chaotic production of vast amounts of information and innovative
ways of dealing with it.

From this, two competing trajectories have emerged which are
simultaneously transforming the space of the political. The first I
have called "post-democracy" because it expands the possibilities, and
even the requirements, of (personal) participation, while ever larger aspects of
(collective) decision-making are moved to arenas that are structurally
disconnected from those of participation. In effect, these arenas are
forming an authoritarian reality in which a small elite is vastly
empowered at the expense of everyone else. The purest incarnation of
this tendency can be seen in the commercial social mass media, such as
Facebook, Google, and the others, as they were newly formed in


hey have not had
to deal with institutional legacies. But both tendencies are no longer
confined to digital networks and are spreading across all aspects of
social life, creating a reality that is, on the structural level,
surprisingly coherent and, on the social and political level, full of
contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
title="viii"}

I traced some aspects of these developments right up to early 2016, when
the German version of this book went into production. Since then a lot
has happened, but I resisted the temptation to update the book for the
English translation, because ideas are always an expression of their
historical moment; updating them would either turn the book into a
completely new version or amount to a retrospective adjustment of the
historical record.

What has become increasingly obvious during 2016 and into 2017 is that
central institutions of liberal democracy are crumbling more quickly and
dramatically than was expected. The race to replace them has kicked into
high gear. The main events driving forward an authoritarian renewal of
politics took place on a national level, in particular the vote by the
UK to leave the EU (Brexit) and the election o


ological sovereignty": to bring the technological infrastructure,
and its developmental potential, back under the control of those who are
using it and are affected by it; that is, the citizens of the
metropolis.

Over the last 18 months, the imbalances between the two trajectories
have become even more extreme because authoritarian tendencies and
surveillance capitalism have been strengthened more quickly than the
commons-oriented practices could establish themselves. But it does not
change the fact that there are fundamental alternatives embedded in the
digital condition. Despite structural transformations that affect how we
do things, there is no inevitability about what we want to do
individually and, even more importantly, collectively.

::: {.poem}
::: {.lineGroup}
Zurich/Vienna, July 2017[]{#Page_ix type="pagebreak" title="ix"}
:::
:::
:::

[Acknowledgments]{.chapterTitle} {#ack}

::: {.section}
While it may be conventional to cite one person as the author of a book,
writing is a process with many collective elements. This book in
particular draws upon many sources, most of which I am no longer able to
acknowledge with any certainty. Far too often,


o the manuscript: Leonhard Dobusch,
Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
thanks are owed to Rebina Erben-Hartig, who edited the original German
manuscript and greatly improved its readability. I am likewise grateful
to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
Verlag, whose faith in the book never wavered despite several delays.
Regarding the English version at hand, it has been a privilege to work
with a translator as skillful as Valentine Pakis. Over the past few
years, writing this book might have been the most important project in
my life had it not been for Andrea Mayr. In this regard, I have been
especially fortunate.[]{#Page_xi type="pagebreak"
title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
:::

Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

::: {.section}
The show had already been going on for more than three hours, but nobody
was bothered by t


ation of artificiality and naturalness is equally present in
Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was also on a
formal level seemingly paradoxical: extremely focused and completely
open. Unlike most of the other acts, she took the stage alone, and
though she hardly moved at all, she nevertheless incited the audience to
participate in numerous ways and genuinely to act out the motto of the
contest ("Join us!"). Throughout the early rounds of the competition,
the beard, which was at first so provocative, transformed into a
free-floating symbol that the public began to appropriate in various
ways. Men and women painted Conchita-like beards on their faces,
newspapers printed beards to be cut out, and fans crocheted beards. Not
only did someone Photoshop a beard on to a painting of Empress Sissi of
Austria, but King Willem-Alexander of the Netherlands even tweeted a
deceptively realistic portrait of his wife, Queen Máxima, wearing a
beard. From one of the biggest stages of all, the evening of Wurst\'s
victory conveyed an impression of how much the culture of Europe had
changed in recent years, both in


e already
formed whose contours are easy to identify not only in niche sectors but
also in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
expanded the gender-identity options for its billion-plus users from 2
to 60. In addition to "male" and "female," users of the English version
of the site can now choose from among the following categories:

::: {.extract}
Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
Female to Male Trans Man, Female to Male Transgender Man, Female to Male
Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
(MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
Two-Spirit, Two-Spirit Person.
:::

This enormous proliferation of cultural possibilities is an expression
of what I will refer to below as the digital condition. Far from being
universally welcomed, its growing presence has also instigated waves of
nostalgia, diffuse resentments, and intellectual panic. Conservative and
reactionary movements, which oppose such developments and desire to
preserve or even re-create previous conditions, have been on the rise.
Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over w


sition of acceptance with
respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
short period of time, a campaign organized mainly through social mass
media collected more than 200,000 signatures in opposition to the
proposal and submitted them to the petitions committee at the state
parliament. At that point, the government responded by putting the
initiative on ice. However, according to the analysis presented in this
book, leaving it on ice creates a precarious situation.

The rise and spread of the digital condition is the result of a
wide-ranging and irreversible cultural transformation, the beginnings of
which can in part be traced back to the nineteenth century. Since the
1960s, however, this shift has accelerated enormously and has
encompassed increasingly broader spheres of social life. More and more
people have been participating in cultural processes; larger and larger
dimensions of existence have become battlegrounds for cultural disputes;
and social activity has been intertwined with increasingly complex
technologies, without which it would hardly be possible to conceive of
these processes, let alone achieve them. The number of competing
cultural projects, wo


draws from many sources; it is
motivated by the widest possible variety of desires, intentions, and
compulsions; and it mobilizes whatever resources might be necessary for
the constitution of meaning. This emphasis on the materiality of culture
is also reflected in the concept of the digital. Media are relational
technologies, which means that they facilitate certain types of
connection between humans and
objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
material and immaterial goods, and in the constitution and coordination
of personal and collective activity. In this regard, the focus is less
on the dominance of a certain class []{#Page_8 type="pagebreak"
title="8"}of technological artifacts -- the computer, for instance --
and even less on distinguishing between "digital" and "analog,"
"material" and "immaterial." Even in the digital condition, the analog
has not gone away. Rather, it has been re-evaluated and even partially
upgraded. The immaterial, moreover, is never entirely without
materiality. On the contrary, the fleetin


are also not so new after all. Many of them
have existed for a long time. At first most of them were totally
separate from the technologies for which, later on, they would become
relevant. It is only in retrospect that these developments can be
identified as beginnings, and it can be seen that much of what we regard
today as novel or revolutionary was in fact introduced at the margins of
society, in cultural niches that were unnoticed by the dominant actors
and institutions. The new technologies thus evolved against a
[]{#Page_11 type="pagebreak" title="11"}background of processes of
societal transformation that were already under way. They could only
have been developed once a vision of their potential had been
formulated, and they could only have been disseminated where demand for
them already existed. This demand was created by social, political, and
economic crises, which were themselves initiated by changes that were
already under way. The new technologies seemed to provide many differing
and promising answers to the urgent questions that these crises had
prompted. It was thus a combination of positive vision and pressure that
motivated a great variety of actors to change, at times


age in order to intervene in the discourse. The rise of
the knowledge economy, the increasingly loud critique of
heteronormativity, and a fundamental cultural critique posed by
post-colonialism enabled a greater number of people to participate in
public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
complexity, I will treat them on different levels: I will depict the
rise of the knowledge economy as a structural change in labor; I will
reconstruct the critique of heteronormativity by outlining the origins
and transformations of the gay movement in West Germany; and I will
discuss post-colonialism as a theory that introduced new concepts of
cultural multiplicity and hybridization -- concepts that are now
influencing the digital condition far beyond the limits of the
post-colonial discourse, and often without any reference to this
discourse at all.

::: {.section}
### The growth of the knowledge economy {#c1-sec-0003}

At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other


en and
women alike -- who, in one form or another, had something to do with
information processing and communication. Yet all of this required more
than just new management techniques. Production and products also became more
complex, so that entire corporate sectors had to be restructured.
Whereas the first decisive inventions of the industrial era were still
made by more or less educated tinkerers, during the last third of the
nineteenth century, invention itself came to be institutionalized. In
Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
Siemens & Halske) exemplifies this transformation. Within 50 years, a
company that began in a proverbial workshop in a Berlin backyard became
a multinational high-tech corporation. It was in such corporate
laboratories, which were established around the year 1900, that the
"industrialization of invention" or the "scientification of industrial
production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
words, even the processes employed in factories and the goods that they
produced became knowledge-intensive. Their invention, planning, and
production required a steadily growing expansion of activities, which
today we would re


h they attracted consumers to
products, were successful in selling to the American public the idea of
their nation entering World War I. Thus, a media industry in the modern
sense was born, and it expanded along with the rapidly growing market
for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
thus explained the existence of an already advanced knowledge economy in
the United States. His arguments fell on extremely fertile soil, for an
intellectual transformation had taken place in other areas of science as
well. A few years earlier, for instance, cybernetics had given the
concepts "information" and "communication" their first scientifically
precise (if somewhat idiosyncratic) definitions and had assigned to them
a position of central importance in all scientific disciplines, not to
mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
investigation seemed to confirm this in the case of the economy, given
that the knowledge economy was primarily concerned with information and
communication. Since then, numerous analyses, for


fied, refined, and criticized the idea that the
knowledge-based activities of the economy have become increasingly
important. In the 1970s this discussion was associated above all with
the notion of the "post-industrial
society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
and in the 1990s the debate revolved around the "network
society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
popular concepts. What these approaches have in common is that they each
diagnose a comprehensive societal transformation that, as regards the
creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
that we know how to distinguish the former from the latter. This is not
unproblematic, however, because in practice the two are usually tightly
intertwined. Moreover, whoever maintains that communicative activities
have taken the place of industrial production in our society has adopted
a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
Factory jobs have not simply disappeared; they have just been partially
reloc


to reduce the effectiveness of this
analysis -- especially its political effectiveness -- for it does more
than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
it considers economically promising should be promoted, and that
society should be reorganized accordingly. Since the 1970s, there has
thus been a feedback loop between scientific analysis and political
agendas. More often than not, it is hardly possible to distinguish
between the two. Especially in Britain and the United States, the
economic transformation of the 1980s was imposed insistently and with
political calculation (the weakening of labor unions).

There are, however, important differences between the developments of
the so-called "post-industrial society" of the 1970s and those of the
so-called "network society" of the 1990s, even if both terms are
supposed to stress the increased significance of information, knowledge,
and communication. With regard to the digital condition, the most
important of these differences are the greater flexibility of economic
activity in general and employment relations in particular, as well as
the


ructuring experts, and new
companies began to promote flat hierarchies, self-responsibility, and
innovation; with these aims in mind, they set about reorganizing large
corporations into small and flexible units. Labor and leisure were no
longer supposed to be separated, for all aspects of a given person could
be integrated into his or her work. In order to achieve economic success
in this new capitalism, it became necessary for every individual to
identify himself or herself with his or her profession. Large
corporations were restructured in such a way that entire departments
found themselves transformed into independent "profit centers." This
happened in the name of creating more leeway for decision-making and of
optimizing the entrepreneurial spirit on all levels, the goals being to
increase value creation and to provide management with more fine-grained
powers of intervention. These measures, in turn, created the need for
computers and for them to be networked. Large corporations
reacted in this way to the emergence of highly specialized small
companies which, by networking and cooperating with other firms,
succeeded in quickly and flexibly exploiting niches in the expanding


oductive entity was no longer the individual company but rather
the network consisting of companies and corporate divisions of various
sizes. In Castells\'s estimation, the decisive advantage of the network
is its ability to customize its elements and their configuration
[]{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
requirements of the "project" at
hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
strategic control centers and as economic and legal units.

This economic structural transformation was already well under way when
the internet emerged as a mass medium around the turn of the millennium.
As a consequence, change became more radical and penetrated into an
increasing number of areas of value creation. The political agenda
oriented itself toward the vision of "creative industries," a concept
developed in 1997 by the newly elected British government under Tony
Blair. A Creative Industries Task Force was established right away, and
its first step was to identify "those activities which have their
origins in individual creativity, skill and talent and which have the
pote


Which He Lives* was
screened at the Berlin International Film Festival and then, shortly
thereafter, broadcast on public television in North Rhine-Westphalia.
The film, which is firmly situated in the agitprop tradition,
[]{#Page_23 type="pagebreak" title="23"}follows a young provincial man
through the various milieus of Berlin\'s gay subcultures: from a
monogamous relationship to nightclubs and public bathrooms until, at the
end, he is enlightened by a political group of men who explain that it
is not possible to lead a free life in a niche, as his own emancipation
can only be achieved by a transformation of society as a whole. The film
closes with a not-so-subtle call to action: "Out of the closets, into
the streets!" Von Praunheim understood this emancipation to be a process
that encompassed all areas of life and had to be carried out in public;
it could only achieve success, moreover, in solidarity with other
freedom movements such as the Black Panthers in the United States and
the new women\'s movement. The goal, according to this film, is to
articulate one\'s own identity as a specific and differentiated identity
with its own experiences, values, and reference systems, and to anch


Rüdiger Lautmann
was already prepared to maintain: "To be homosexual has become
increasingly normalized, even if homophobia lives on in the depths of
the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
normalization was also reflected in a study published by the Ministry of
Justice in the year 2000, which stressed "the similarity between
homosexual and heterosexual relationships" and, on this basis, made an
argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
Around the year 2000, however, the classical gay movement had already
passed its peak. A profound transformation had begun to take place in
the middle of the 1990s. It lost its character as a new social movement
(in the style of the 1970s) and began to splinter inwardly and
outwardly. One could say that it transformed from a mass movement into a
multitude of variously networked communities. The clearest sign of this
transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
transgender), which, since the mid-1990s, has represented the internal
heterogeneity of the movement as it has shifted toward becoming a
network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
radical actors were already speaking out against the normalization of
homosexuality. Queer theory, for example, was calling into question the
"essentialist" definition of gender []{#Page_27 type="pagebreak"
title="27"}--


by underscoring mutability, hybridity, and
uniqueness. Both the scope of what could be expressed in public and the
circle of potential speakers expanded yet again. And, at least to some
extent, the drag queen Conchita Wurst popularized complex gender
constructions that went beyond the simple woman/man dualism. All of that
said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
lives on in the depths of the collective disposition" -- continued to
hold true.

If the gay movement is representative of the social liberation of the
1970s and 1980s, then it is possible to regard its transformation into
the LGBT movement during the 1990s -- with its multiplicity and fluidity
of identity models and its stress on mutability and hybridity -- as a
sign of the reinvention of this project within the context of an
increasingly dominant digital condition. With this transformation,
however, the diversification and fluidification of cultural practices
and social roles have not yet come to an end. Ways of life that were
initially subcultural and facing existential pressure []{#Page_28
type="pagebreak" title="28"}are gradually entering the mainstream. They
are expanding the range of readily available models of identity for
anyone who might be interested, be it with respect to family forms
(e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
other principles of life and belief. A


in this way that
symbols of authority are hybridized and made into something of one\'s
own. For me, hybridization is not simply a mixture but rather a
[]{#Page_31 type="pagebreak" title="31"}strategic and selective
appropriation of meanings; it is a way to create space for negotiators
whose freedom and equality are
endangered.[^44^](#c1-note-0044){#c1-note-0044a}
:::

Hybridization is thus a cultural strategy for evading marginality that
is imposed from the outside: subjects, who from the dominant perspective
are incapable of doing so, appropriate certain aspects of culture for
themselves and transform them into something else. What is decisive is
that this hybrid, created by means of active and unauthorized
appropriation, opposes the dominant version and the resulting speech is
thus legitimized from another -- that is, from one\'s own -- position.
In this way, a cultural engagement is set under way and the superiority
of one meaning or another is called into question. Who has the right to
determine how and why a relationship with others should be entered,
which resources should be appropriated from them, and how these
resources should be used? At the heart of the matter lie the abilitie


h anything goes, yet the central meaning of
negotiation, the contextuality of consensus, and the mutability of every
frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
which can be shared equally by everyone -- are always potentially
negotiable.

Post-colonialism draws attention to the "disruptive power of the
excluded-included third," which becomes especially virulent when it
"emerges in the middle of semantic
structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
this power reveals the increasing cultural independence of those
formerly colonized, and it also transforms the cultural self-perception
of the West, for, even in Western nations that were not significant
colonial powers, there are multifaceted tensions between dominant
cultures and those who are on the defensive against discrimination and
attributions by others. Instead of relying on the old recipe of
integration through assimilation (that is, the dissolution of the
"other"), the right to self-determined difference is being called for
more emphatically. In such a manner, collective identities, such as
national identities, are freed from their questionable appeals to
cultural homogeneity and es


nd to its catastrophic social and ecological consequences,
with a new and comprehensive manner of seeing and acting that was
unrestricted by economics.

Toward the end of the 1970s, this expanded notion of design owed less
and less to emancipatory social movements, and its socio-political goals
began to fall by the wayside. Three fundamental patterns survived,
however, which go beyond design and remain characteristic of the
culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
the discovery of the public as emancipated users and active
participants; the use of appropriation, transformation, and
recombination as methods for creating ever-new aesthetic
differentiations; and, finally, the intention of shaping the lifeworld
of the user.[^57^](#c1-note-0057){#c1-note-0057a}

As these patterns became depoliticized and commercialized, the focus of
designing the "lifeworld" shifted more and more toward designing the
"experiential world." By the end of the 1990s, this had become so
normalized that even management consultants could assert that
"\[e\]xperiences represent an existing but previously unarticulated
*genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It w


om tiny batches of creative-industrial products all the way to
global processes of "mass customization," in which factory-based mass
production is combined with personalization. One of the first
applications of this was introduced in 1999 when, through its website, a
sporting-goods company allowed customers to design certain elements of a
shoe by altering it within a set of guidelines. This was taken a step
further by the idea of "user-centered innovation," which relies on the
specific knowledge of users to enhance a product, with the additional
hope of discovering unintended applications and transforming these into
new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
become possible for end users to take over the design process from the
beginning, which has become considerably easier with the advent of
specialized platforms for exchanging knowledge, alongside semi-automated
production tools such as mechanical mills and 3D printers.
Digitalization, which has allowed all content to be processed, and
networking, which has created an endless amount of content ("raw
material"), have turned appropriation and recombination into general
methods of cultural production.[^64^](#


erior organized a conference to investigate
faster methods of data processing. Two methods were tested for making
manual labor more efficient, one of which had the potential to achieve
greater efficiency by means of novel data-processing machines. The
latter system emerged as the clear victor; developed by an engineer
named Hermann Hollerith, it mechanically processed and stored data on
punch cards. The idea was based on Hollerith\'s observations of the
coupling and decoupling of railroad cars, which he interpreted as
modular units that could be combined in any desired order. The punch
card transferred this approach to information []{#Page_41
type="pagebreak" title="41"}management. Data were no longer stored in
fixed, linear arrangements (tables and lists) but rather in small units
(the punch cards) that, like railroad cars, could be combined in any
given way. The increase in efficiency -- with respect to speed *and*
flexibility -- was enormous, and nearly a hundred of Hollerith\'s
machines were used by the Census
Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
in the history of information processing, with technical means no longer
being used exclusively to st


ith his slogan "the medium is the message." He maintained that every
medium of communication, by means of its media-specific characteristics,
directly affected the consciousness, self-perception, and worldview of
every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
believed, happens independently of and in addition to whatever specific
message a medium might be conveying. From this perspective, reality does
not exist outside of media, given that media codetermine our personal
relation to and behavior in the world. For McLuhan and the Toronto
School, media were thus not channels for transporting content but rather
the all-encompassing environments -- galaxies -- in which we live.

Such ideas were circulating much earlier and were intensively developed
by artists, many of whom were beginning to experiment with new
electronic media. An important starting point in this regard was the
1963 exhibit *Exposition of Music -- Electronic Television* by the
Korean artist Nam June Paik, who was then collaborating with Karlheinz
Stockhausen in Düsseldorf. Among other things, Paik presented 12
television sets, the screens of which were "distorted" by magnets. Here,
however, "distorted" is a


heaper. In the name of "tactical
media," a new generation of artistic and political media activists came
together in the middle of the
1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
revolution," which in the late 1980s had made video equipment available
to broader swaths of society, stirring visions of democratic media
production, with the newly arrived medium of the internet. Despite still
struggling with numerous technical difficulties, they remained steadfast
in their belief that the internet would solve the hitherto intractable
problem of distributing content. The transition from analog to digital
media lowered the production hurdle yet again, not least through the
ongoing development of improved software. Now, many stages of production
that had previously required professional or semi-professional expertise
and equipment could also be carried out by engaged laymen. As a
consequence, the focus of interest broadened to include not only the
development of alternative production groups but also the possibility of
a flexible means of rapid intervention in existing structures. Media --
both television and the internet -- were understood as environments in
which one could act without directly representing a reality outside of
the media. Television was analyzed down to its own inherent laws, which
could then be manipulated to affect things beyond the media.
Increasingly, culture jamming and the campaigns of so-called
communication guerrillas were blurring the difference between media and
political activity.[^77[]{#Page_47 type="pagebreak"
title="47"}^](#c1-note-0077){#c1-note-0077a}

This difference was dissolved entirely by a new generation of
politically motivated artists, activists, and hackers, who transferred
the tactics of civil disobedience -- blockading a building with a
sit-in, for instance -- to the
internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
Zapatista Army of National Liberation rose up in the south of Mexico,
several media projects were created to support its mostly peaceful
opposition and to make the movement known in Europe and North America.
As part of this loose network, in 1998 the American artist collective
Electronic Disturbance Theater developed a relatively simple computer
program called FloodNet that enabled networked sympathizers to shut down
websites,


ving conflicts -- commands (by kings and presidents) and votes --
were dismissed. Implemented in their place was a pragmatics of open
cooperation that was oriented around two guiding principles. The first
was that different views should be discussed without a single individual
being able to block any final decisions. Such was the meaning of the
expression "rough consensus." The second was that, in accordance with
the classical engineering tradition, the focus should remain on concrete
solutions that had to be measured against one []{#Page_52
type="pagebreak" title="52"}another on the basis of transparent
criteria. Such was the meaning of the expression "running code." In
large part, this method was possible because the group oriented around
these principles was, internally, relatively homogeneous: it consisted
of top-notch computer scientists -- all of them men -- at respected
American universities and research centers. For this very reason, many
potential and fundamental conflicts were avoided, at least at first.
This internal homogeneity lends rather dark undertones to their sunny
vision, but this was hardly recognized at the time. Today these
undertones are far more apparent, and I wi


tware developers\' immediate environment experienced its first
major change in the late 1970s. Software, which for many had been a mere
supplement to more expensive and highly specialized hardware, became a
marketable good with stringent licensing restrictions. A new generation
of businesses, led by Bill Gates, suddenly began to label cooperation
among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
Previously it had been par for the course, and above all necessary, for
programmers to share software with one another. The former culture of
horizontal cooperation between developers transformed into a
hierarchical and commercially oriented relation between developers and
users (many of whom, at least at the beginning, had developed programs
of their own). For the first time, copyright came to play an important
role in digital culture. In order to survive in this environment, the
practice of open cooperation had to be placed on a new legal foundation.
Copyright law, which served to separate programmers (producers) from
users (consumers), had to be neutralized or circumvented. The first step
in this direction was taken in 1984 by the activist and programmer
Richard Stallman. Comp


his same time that the technologies in question, which
were already no longer very new, entered mainstream society. Within a
few years, the internet became part of everyday life. Three years before
the turn of the millennium, only about 6 percent of the entire German
population used the internet, often only occasionally. Three years after
the millennium, the number of users already exceeded 53 percent. Since
then, this share has increased even further. In 2014, it was more than
97 percent for people under the age of
40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
data transfer rates increased considerably, broadband connections
supplanted dial-up modems, and the internet was suddenly "here" and no
longer "there." With the spread of mobile devices, especially since the
year 2007 when the first iPhone was introduced, digital communication
became available both extensively and continuously. Since then, the
internet has been ubiquitous. The amount of time that users spend online
has increased and, with the rapid ascent of social mass media such as
Facebook, people have been online in almost every situation and
circumstance in life.[^96^](#c1-note-0096){#c1-n


pied by specific
(geographical and cultural) centers. Rather, a space has been opened up
for endless negotiations, a space in which -- at least in principle --
everything can be called into question. This is not, of course, a
peaceful and egalitarian process. In addition to the practical hurdles
that exist in polarizing societies, there are also violent backlashes
and new forms of fundamentalism that are attempting once again to remove
certain religious, social, cultural, or political dimensions of
existence from the discussion. Yet these can only be understood in light
of a sweeping cultural transformation that has already reached
mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
the digital condition has become quotidian and dominant. It forms a
cultural constellation that determines all areas of life, and its
characteristic features are clearly recognizable. These will be the
focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#c1-ntgp-9999}
------------------

::: {.section .notesList}
[1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
*Internet: Segen oder Fluc


he
Computerization of Society: A Report to the President of France*
(Cambridge, MA: MIT Press, 1980).

[15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
Network Society* (Oxford: Blackwell, 1996).

[16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

[17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
*The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
2005).

[18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
*The Second Industrial Divide: Possibilities of Prosperity* (New York:
Basic Books, 1984).

[19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
Society*. For a critical evaluation of Castells\'s work, see Felix
Stalder, *Manuel Castells and the Theory of the Network Society*
(Cambridge: Polity, 2006).

[20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
Documents" (1998); quoted from Terry Flew, *The Creative Industries:
Culture and Policy* (


[--trans.\].

[33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
*Privat/öffentlich*, p. 7 \[--trans.\].

[34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
Bundesanzeiger, 2001).

[35](#c1-note-0035a){#c1-note-0035}  This process of internal
differentiation has not yet reached its conclusion, and thus the
acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
questioning, intersex, intergender, asexual, ally.

[36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
Feminism and the Subversion of Identity* (New York: Routledge, 1990).

[37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

[38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
York: Vintage Books, 1978).

[39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
Despotism: A


xclude feelings of
regret about the loss of an exotic and romantic way of life, such as
those of T. E. Lawrence, whose activities in the Near East during the
First World War were memorialized in the film *Lawrence of Arabia*
(1962).

[43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
however, for portraying orientalism so dominantly that there seems to be
no way out of the existing dependent relations. For an overview of the
debates that Said has instigated, see María do Mar Castro Varela and
Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
(Bielefeld: Transcript, 2005), pp. 37--46.

[44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
(November 9, 2007), online \[--trans.\].

[45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
Culture* (New York: Routledge, 1994), p. 4.

[46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
(Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

[47](#c1-note-0047a


s of Gastarbeiter*
(Munich: Trikont, 2013).

[49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
found at: \<\>.

[50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
press release by the CDU/CSU Alliance in the German Parliament (June 4,
2014), online \[--trans.\].

[51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
book is forthcoming: *The Invention of Creativity: Modern Society and
the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

[52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
in Deutschland* (Frankfurt am Main: Campus, 2007).

[53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
type="pagebreak" title="181"}

[54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
introduced to the field of design primarily by Buckminster Fuller. See
Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
and the Disappearance of the Outside* (Berlin: Sternberg


us: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

[60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
(eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

[61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
*Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
2009).

[62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
according to Wikipedia, "is an interactive networked narrative that uses
the real world as a platform and employs transmedia storytelling to
deliver a story that may be altered by players\' ideas or actions."

[63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
Innovation* (Cambridge, MA: MIT Press, 2005).

[64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
involvement of users simply serves to increase the efficiency of
production processes and customer service. Many activities that were
once undertaken at the expense of businesses now have to be carried out
by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
Campus, 2005).

[65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
pp. 411--16.

[66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
Ideological State Apparatuses (Notes towards an Investigation)," in
Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
(New York: Monthly Review Press, 1971), pp. 127--86.

[67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
*Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
2013), pp. 20--35.

[68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
1977).

[69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
the Toronto School of Communication," *Canadian Journal of
Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
title="182"}

[70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

[71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
-- Electronic Television" (leaflet accompanying the exhibition). Quoted
from Zhang Ga, "Sounds, Images, Perception and El


Mark Dery, *Culture Jamming:
Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
Media, 1993); Luther Blissett et al., *Handbuch der
Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

[78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
1996).

[79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
"distributed denial-of-service attack" (DDoS).

[80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

[81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
Harper Perennial, 2014).

[82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
21. In this regard, see also the documentary films *Das Netz* by Lutz
Dammbeck (2003) and *All Watched Over by Mach


existence and
development depend on []{#Page_58 type="pagebreak" title="58"}communal
formations. "Algorithmicity" denotes those aspects of cultural processes
that are (pre-)arranged by the activities of machines. Algorithms
transform the vast quantities of data and information that characterize
so many facets of present-day life into dimensions and formats that can
be registered by human perception. It is impossible to read the content
of billions of websites. Therefore we turn to services such as Google\'s
search algorithm, which reduces the data flood ("big data") to a
manageable amount and translates it into a format that humans can
understand ("small data"). Without such algorithms, human beings could not
comprehend or do anything within a culture built around digital
technologies, but they influence our understanding and activity in an
ambivalent way. They create new dependencies by pre-sorting and making
the (informational) world available to us, yet simultaneously ensure our
autonomy by providing the preconditions that enable us to act.
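The reduction described here -- from billions of unstructured documents to a short, ranked list -- can be illustrated with a toy version of the PageRank idea: a page gains rank from the pages that link to it, weighted by the linkers' own rank. The graph, damping factor, and function below are illustrative assumptions for this sketch, not details of Google's actual implementation.

```python
# A toy sketch of the PageRank idea: pages gain rank from the pages
# that link to them, weighted by the linkers' own rank. The graph,
# damping factor, and iteration count are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a "dangling" page spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:  # otherwise, split the rank among the linked pages
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Four hypothetical pages: "c" is linked to by all the others and so
# ends up with the highest rank.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

The point of the sketch is the reduction itself: an unstructured tangle of links comes out the other end as a single ordered list ("small data") on which a human can act.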
:::

::: {.section}
Referentiality {#c2-sec-0002}
--------------

In the digital condition, one of the methods (if not *the* most
fundamental method) enablin


perations of various genres of professional and everyday
culture. In its essence, it is the use of materials that are already
equipped with meaning -- as opposed to so-called raw material -- to
create new meanings. The referential techniques used to achieve this are
extremely diverse, a fact reflected in the numerous terms that exist to
describe them: re-mix, re-make, re-enactment, appropriation, sampling,
meme, imitation, homage, tropicália, parody, quotation, post-production,
re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
(non-academic) research, re-creativity, mashup, transformative use, and
so on.

These processes have two important aspects in common: the
recognizability of the sources and the freedom to deal with them however
one likes. The first creates an internal system of references from which
meaning and aesthetics are derived in an essential
manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
precondition enabling the creation of something that is both new and on
the same level as the re-used material. This represents a clear
departure from the historical--critical method, which endeavors to embed
a source in its original context in order to re-determine its meaning,
but also a departure from classical forms of rendition such as
translations, adaptations (for instance, adapting a book for a film), or
cover versions, which, though they translate a work into another
language or medium, still attempt to preserve its original meaning.
Re-mixes produced by DJs are one example of the referential treatment of
source material. In his book on the history of DJ culture, the
journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
salvaging authenticity, but with creating a new
authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
distancing themselves from the past, which would follow the (Western)
logic of progress or the spirit of the avant-garde, these processes
refer explicitly to precursors and to existing materi


[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
processes, on the contrary, pieces are not brought together as much as
they are integrated into one another by being altered, adapted, and
transformed. Unlike the older arrangement, it is not the fissures
between elements that are foregrounded but rather their synthesis in the
present. Conchita Wurst, the bearded diva, is not torn between two
conflicting poles. Rather, she represents a successful synthesis --
something new and harmonious that distinguishes itself by showcasing
elements of the old order (man/woman) and simultaneously transcending
them.

This synthesis, however, is usually just temporary, for at any time it
can itself serve as material for yet another rendering. Of course, this
is far easier to pull off with digital objects than with analog objects,
though these categories have become increasingly porous and thus
increasingly problematic as opposites. More and more objects exist both
in an analog and in a digital form. Think of photographs and slides,
which have become so easy to digitalize. Even three-dimensional objects
can now be scanned and printed. In the future, programmable materials
with controllable and


s that are, in
themselves, meaningless. Consequently, Florian Cramer has argued that
"every form of literature that is recorded alphabetically and not based
on analog parameters such as ideograms or orality is already digital in
that it is stored in discrete
signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
features of the alphabet, as Marshall McLuhan repeatedly underscored,
did not fully develop until the advent of the printing
press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
other words, that first abstracted written signs from analog handwriting
and transformed them into standardized symbols that could be repeated
without any loss of information. In this practical sense, the printing
press made writing digital, with the result that dealing with texts soon
became radically different.

::: {.section}
### Information overload 1.0 {#c2-sec-0003}

The printing press made texts available in the three respects mentioned
above. For one thing, their number increased rapidly, while their price
sank significantly. During the first two generations after Gutenberg\'s
invention -- that is, between 1450 and 1500 -- more books were produced
than during the tho


0019){#c2-note-0019a} And this was happening
despite protracted attempts to block or close down the file-sharing site
by legal means and despite a variety of competing services. Even when
the founders of the website were sentenced in Sweden to pay large fines
(around €3 million) and to serve time in prison, the site still did not
disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
same time, new providers have entered the market of free access; their
method is not to facilitate distributed downloads but rather to offer,
on account of the drastically reduced cost of data transfers, direct
streaming. Although some of these services are relatively easy to locate
and some have been legally banned -- the best-known case in Germany
being that of the popular site kino.to -- more of them continue to
appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
[]{#Page_67 type="pagebreak" title="67"}is not limited to music and
films, but encompasses all media formats. For instance, it is
foreseeable that the number of freely available plans for 3D objects
will increase along with the popularity of 3D printing. It has almost
escaped notice, however, that so-called


mostly from the
worlds of science fiction, comics/manga, or computer games -- by donning
home-made costumes and striking characteristic
poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
effort that goes into this is mostly reflected in the costumes, not in
the choreography or dramaturgy of the performance. What is significant
is that these costumes are usually not exact replicas but are rather
freely adapted by each player to represent the character as he or she
interprets it to be. Accordingly, "Cosplay is a form of appropriation
[]{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
performs an existing story in close connection to the fan\'s own
identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
admittedly, goes back quite far in the history of fan culture, but it
has experienced a striking surge through the opportunity for fans to
network with one another around the world, to produce costumes and
images of professional quality, and to place themselves on the same
level as their (fictitious) idols. By now it has become a global
subculture whose members are active not only online but also at hundreds
of conventions throughout the world. In


erably
over the past few years, has slowly begun to professionalize, with
shops, books, and players who make paid appearances. Even in fan
culture, stars are born. As soon as the subculture has exceeded a
certain size, this gradual onset of commercialization will undoubtedly
lead to tensions within the community. For now, however, two of its
noteworthy features remain: the power of the desire to appropriate, in a
bodily manner, characters from vast cultural universes, and the
widespread combination of free interpretation and meticulous attention
to detail.
:::

::: {.section}
### Lineages and transformations {#c2-sec-0008}

Because of the great effort that they require, re-enactment and cosplay
are somewhat extreme examples of singling out, appropriating, and
referencing. As everyday activities that take place almost incidentally,
however, these three practices usually do not make any significant or
lasting differences. Yet they do not happen just once, but over and over
again. They accumulate and thus constitute referentiality\'s second type
of activity: the creation of connections between the many things that
have attracted attention. In such a way, paths are forged through the
vast com


tics that deprives so many people of the resources
needed to take advantage of these new freedoms in their own lives. As a
result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

Under the digital condition, this process has permeated the finest
structures of social life. Individualization, commercialization, and the
production of differences (through design, for instance) are ubiquitous.
Established civic institutions are not alone in being hollowed out;
relatively new collectives are also becoming more differentiated, a
development that I outlined above with reference to the transformation
of the gay movement into the LGBT community. Nevertheless, or
perhaps for this very reason, new forms of communality are being formed
in these offshoots -- in the small activities of everyday life. And
these new communal formations -- rather []{#Page_80 type="pagebreak"
title="80"}than individual people -- are the actual subjects who create
the shared meaning that we call culture.

::: {.section}
### The problem of the "community" {#c2-sec-0010}

I have chosen the rather cumbersome expression "communal formation" in
order to avoid the term "community" (*Gemeinschaft*), although th


sms, and it regards community as an intermediary level between
the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
there is a related English term, which seems even more productive for my
purposes, namely "community of practice," a concept that is more firmly
grounded in the empirical observation of concrete social relationships.
The term was introduced at the beginning of the 1990s by the social
researchers Jean Lave and Étienne Wenger. They observed that, in most
cases, professional learning (for instance, in their case study of
midwives) does not take place as a one-sided transfer of knowledge or
proficiency, but rather as an open exchange, often outside of the formal
learning environment, between people with different levels of knowledge
and experience. In this sense, learning is an activity that, though
distinguishable, cannot easily be separated from other "normal"
activities of everyday life. As Lave and Wenger stress, however, the
community of practice is not only a social space of exchange; it is
rather, and much more fundamentally, "an intrinsic condition for the
existence of knowledge, not least because it provides the interpretive
support necessary for makin


e
amount of information from the excess of potentially available
information and brings it into a meaningful context, whereby it
validates the selection itself and orients the activity of each of its
members.

The new communal formations consist of self-referential worlds whose
constructive common practice affects the foundations of social activity
itself -- the constitution of space and time. How? The spatio-temporal
horizon of digital communication is a global (that is, placeless) and
ongoing present. The technical vision of digital communication is always
the here and now. With the instant transmission of information,
everything that is not "here" is inaccessible and everything that is not
"now" has disappeared. Powerful infrastructure has been built to achieve
these effects: data centers, intercontinental networks of cables,
satellites, high-performance nodes, and much more. Through globalized
high-frequency trading, actors in the financial markets have realized
this []{#Page_90 type="pagebreak" title="90"}technical vision to its
broadest extent by creating a never-ending global present whose expanse
is confined to milliseconds. This process is far from coming to an end,
for massive amounts of investment are allocated to accomplish even the
smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
300-million-dollar transatlantic telecommunications cable (Hibernia
Express) was put into operation between London and New York -- the first
in more than 10 years -- with the single goal of accelerating automated
trading between the two places by 5.2 milliseconds.

For social and biological processes, this technical horizon of space and
time is neither achievable nor desirable. Such processes, on the
contrary, are existentially dependent on other spatial and temporal
orders. Yet because of the existence of this non-geographical and
atemporal horizon, the need -- as well as the possibility -- has arisen
to redefine the


war-torn
Syria is unreachably distant even for seasoned reporters and their
staff, veritable travel agencies are being set up in order to bring
Western jihadists there in large numbers.

Things are similar for the temporal dimensions of social and biological
processes. Permanent presence is a temporality that is inimical to life
but, under its influence, temporal rhythms have to be redefined as well.
What counts as fast? What counts as slow? In what order should things
proceed? On the everyday level, for instance, the matter can be as
simple as how quickly to respond to an email. Because the transmission
of information hardly takes any time, every delay is a purely social
creation. But how much is acceptable? There can be no uniform answer to
this. The members of each communal formation have to negotiate their own
rules with one another, even in areas of life that are otherwise highly
formalized. In an interview with the magazine *Zeit*, for instance, a
lawyer with expertise in labor law was asked whether a boss may require
employees to be reachable at all times. Instead of answering by
referring to any binding legal standards, the lawyer casually advised
that this was a matter of flexi


s are
simultaneously voluntary and binding; they allow actors to meet
eye-to-eye instead of entering into hierarchical relations with one
another. If everyone voluntarily complies with the protocols, then it is
not necessary for one actor to give instructions to another. Whoever
accepts the relevant protocols can interact with others who do the same;
whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
will remain on the outside. Protocols establish, for example, common
languages, technical standards, or social conventions. The fundamental
protocol for the internet is the Transmission Control Protocol/Internet
Protocol (TCP/IP). This suite of protocols defines the common language
for exchanging data. Every device that exchanges information over the
internet -- be it a smartphone, a supercomputer in a data center, or a
networked thermostat -- has to use these protocols. In growing areas of
social contexts, the common language is English. Whoever wishes to
belong has to speak it increasingly often. In the natural sciences,
communication now takes place almost exclusively in English. Non-native
speakers who accept this norm may pay a high price: they have to learn a
new


s may be blatant hype and
self-promotion but, as a general estimation, Hammond\'s assertion is not
entirely beyond belief. It remains to be seen whether algorithms will
replace or simply supplement traditional journalism. Yet because media
companies are now under strong financial pressure, it is certainly
reasonable to predict that many journalistic texts will be automated in
the future. Entirely different applications, however, have also been
conceived. Alexander Pschera, for instance, foresees a new age in the
relationship between humans and nature, for, as soon as animals are
equipped with transmitters and sensors and are thus able to tell their
own stories through the appropriate software, they will be regarded as
individuals and not merely as generic members of a
species.[^87^](#c2-note-0087){#c2-note-0087a}

We have not yet reached this point. However, given that the CIA has also
expressed interest in Narrative Science and has invested in it through
its venture-capital firm In-Q-Tel, there are indications that
applications are being developed beyond the field of journalism. For the
purpose of spreading propaganda, for instance, algorithms can easily be
used to create a flood of ent


naging immense and
unstructured amounts of data. On the other hand, these large amounts of
data and the computing centers in which they are stored and processed
provide the material precondition for developing increasingly complex
algorithms. Necessities and possibilities motivate one
another.[^98^](#c2-note-0098){#c2-note-0098a}

Perhaps the best-known algorithms that sort the digital infosphere and
make it usable in its present form are those of search engines, above
all Google\'s PageRank. Thanks to these, we can find our way around in a
world of unstructured information and transfer ever larger parts of the
(informational) world into this unstructured order without giving rise
to the "Library of Babel." Here, "unstructured" means that
there is no prescribed order such as (to stick []{#Page_112
type="pagebreak" title="112"}with the image of the library) a cataloging
system that assigns to each book a specific place on a shelf. Rather,
the books are spread all over the place and are dynamically arranged,
each according to a search, so that the appropriate books for each
visitor are always standing ready at the entrance. Yet the metaphor of
books being strew


aluated with regard to the relation between
"information" and "the world," for instance with a qualitative criterion
such as "true"/"false." Rather, the sphere of information is treated as
a self-referential, closed world, and documents are accordingly only
evaluated in terms of their position within this world, though with
quantitative criteria such as "central"/"peripheral."
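This self-referential, purely quantitative mode of evaluation can be sketched in a few lines of code. What follows is a minimal illustration in the spirit of PageRank, using an invented three-page link graph; Google's actual algorithm is far more elaborate and not public.

```python
# Minimal sketch of link-based ranking in the spirit of PageRank.
# The pages and links below are invented for illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """Rank pages purely by their position within the link graph."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page without outgoing links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A tiny, hypothetical link graph: "a" and "c" both link to "b".
graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(graph)
```

In this tiny graph, the page with the most incoming links ends up "central," even though the sketch never inspects any page's content: the information sphere is evaluated only in relation to itself.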

Even though the PageRank algorithm was highly effective and assisted
Google\'s rapid ascent to a market-leading position, at the beginning it
was still relatively simple and its mode of operation was at least
partially transparent. It followed the classical statistical model of an
algorithm. A document or site referred to by many links was considered
more important than one to which fewer links
referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
the given structural order of information and determined the position of
every document therein, and this was largely done independently of the
context of the search and without making any assumptions about it. This
approach functioned relatively well as long as the volume of information
did not exceed a certain size, and as long as the users and their
s


urant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
:::

::: {.section}
### Algorithm clouds {#c2-sec-0023}

In order to react to such changes in user behavior -- and simultaneously
to advance it further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which influences the value of a site within
PageRank and thus the order of search results. The algorithm is no
longer a fixed object or unchanging recipe but is transforming into a
dynamic process, an opaque cloud composed of multiple interacting
algorithms that are continuously refined (between 500 and 600 times a
year, according to some estimates). These ongoing developments are so
extensive that, since 2003, several new versions of the algorithm cloud
have appeared each year with their own names. In 2014 alone, Google
carried out 13 large updates, more than ever
before.[^105^](#c2-note-0105){#c2-note-0105a}

These changes continue to bring about new levels of abstraction, so that
the algorithm takes into account additional variables such as the time
an


ere he or she will have to go
next. On the basis of real-time traffic data, it will then suggest the
optimal way to get there. For those driving cars, the amount of traffic
on the road will be part of the equation. This is ascertained by
analyzing the motion profiles of other drivers, which will allow the
program to determine whether the traffic is flowing or stuck in a jam.
If enough historical data is taken into account, the hope is that it
will be possible to redirect cars in such a way that traffic jams should
no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
public transport, Google Now evaluates real-time data about the
locations of various transport services. With this information, it will
suggest the optimal route and, depending on the calculated travel time,
it will send a reminder (sometimes earlier, sometimes later) when it is
time to go. That which Google is just experimenting with and testing in
a limited and unambiguous context is already part of Facebook\'s
everyday operations. With its EdgeRank algorithm, Facebook already
organizes everyone\'s newsfeed, entirely in the background and without
any explicit user interaction. On the basis of three variables -- user
affinity (previous interactions between two users), content weigh


to this
function or removed from it. All of this happens behind the user\'s back
and in accordance with the goals and positions that are relevant to the
developers of a given algorithm, be it to optimize profit or
surveillance, create social norms, improve services, or whatever else.
The results generated in this way are sold to users as a personalized
and efficient service that provides a quasi-magical product. Out of the
enormous haystack of searchable information, results are generated that
are made to seem like the very needle that we have been looking for. At
best, it is only partially transparent how these results came about and
which positions in the world are strengthened or weakened by them. Yet,
as long as the needle is somewhat functional, most users are content,
and the algorithm registers this contentedness to validate itself. In
this dynamic world of unmanageable complexity, users are guided by a
sort of radical, short-term pragmatism. They are happy to have the world
pre-sorted for them in order to improve their activity in it. Regarding
the matter of whether the information being provided represents the
world accurately or not, they are unable to formulate an adequate
a


rentialism' in Contemporary Art,"
trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
99.

[2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
distinguishes these processes from plagiarism. The latter operates with
the complete opposite aim, namely that of borrowing sources without
acknowledging them.

[3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
Quartet Books, 1998), p. 34.

[4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
Minnesota Press, 1997), p. 151.

[5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
Minnesota Press, 1984).

[6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
Remix-Kultur," *i-rights.info* (May 25, 2009), online.

[7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
Statements: Poetische Kalküle und Phantasmen des selbstausführenden
Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

[8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
the alphabet, every manuscript is unique because it not only depended on
the sequence of letters but also on the individual ab


ment demanding
that a new organization be established to license the digital rights of
out-of-print books. See "Authors Guild: Amazon was Google's Target,"
*The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
In October 2015, however, the next-highest authority -- the United
States Court of Appeals for the Second Circuit -- likewise decided in
Google\'s favor. The Authors Guild promptly announced its intention to
take the case to the Supreme Court.

[14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

[15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
for the Future project (2007--14), the Netherlands alone invested more
than €170 million to digitize the collections of the most important
audiovisual archives. Over 10 years, the cost of digitizing the entire
cultural heritage of Europe has been estimated to be around €100
billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
Heritage: A Report for the Comité des Sages of the European Commission*
(November 2010), online.

[16](#


es
are possible. It seems credible, however, that the Pirate Bay was
attracting around a billion page views per month by the end of 2013.
That would make it the seventy-fourth most popular internet destination.
See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
2014), online.

[20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

[21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
any difference between a "stream" and a "download." In both cases, a
complete file is transferred to the user\'s computer and played.

[22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
but illegal in Austria, though digitized texts are routinely made
available there in seminars. See Seyavash Amini Khanimani and Nikolaus
Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
Werknutzung im österreichischen Urheberrecht zur Privilegierung
elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
2011), online.

[23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
"Digitalisierung" (2015), online \[--trans.\].

[24](#c2-note


ger than
Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
(2011), online.

[31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
defines "selfie" as a "photographic self-portrait; *esp*. one taken with
a smartphone or webcam and shared via social media."

[32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
*Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
Kulturproduktion* (Vienna: Turia + Kant, 2011).

[33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
(London: Fontana Press, 1977), pp. 142--8.

[34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
(Darmstadt: Von Zabern, 2013).

[35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
realization but has long been a special area of research for
musicologists. What is n


surge in new members after
reunification. By 2010, both parties already had fewer members than
Greenpeace, whose 580,000 members make it Germany's largest NGO.
Parallel to this, between 1970 and 2010, the proportion of people
without any religious affiliations shrank to approximately 37 percent.
That there are more churches and political parties today is indicative
of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
for any single organization to attract broad strata of society.

[39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

[40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
Society*, trans. Charles P. Loomis (East Lansing: Michigan State
University Press, 1957).

[41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
"The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
*The Cambridge Companion to the Communist Manifesto*, ed. Carver and
James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
at 239. For Marx and Engels, this was -- like everything pertaining to
the dynamics of capitalism -- a thoroughly ambivalent development. For,
in this case, it finally forced people "to take a down-to-earth view of
their circumstances, their multifarious relationships" (ibid.).

[42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
1944) that the idea of strictly separated spheres, which are supposed to
be so typical of society, is in fact highly ideological. He argued above
all that the attempt to implement this separation fully and consistently
in the form of the free market would destroy the foundations of society
because both the life of workers and the environment of the market
itself would be regarded as externalities. For a recent adaptation of
this argument, see David Graeber, *Debt: The First 5000 Years* (New
York: Melville House, 2011).

[43](#c2-note-0043a){#c2-note-0043


llman,
*Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
2012). The term is practical because it is easy to understand, but it is
also conceptually contradictory. An individual (an indivisible entity)
cannot be defined in terms of a distributed network. With a nod toward
Gilles Deleuze, the cumbersome but theoretically more precise term
"dividual" (the divisible) has also been used. See Gerald Raunig,
"Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

[53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
of Social Signatures in Human Communication," *Proceedings of the
National Academy of Sciences of the United States of America* 111
(2014): 942--7.

[54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
study of where people find out information about new jobs. As the study
shows, this information does not usually come from close friends, whose
level of knowledge often does not differ much from that of the person
looking for a job, but rather from loose acquaintances, who


rlap much with one\'s own and who can therefore
make information available from outside of one\'s own network. See Mark
Granovetter, "The Strength of Weak Ties," *American Journal of
Sociology* 78 (1973): 1360--80.

[55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
420.

[56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

[57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
Columbia University Press, 2013).

[58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
-- has already become so standard that it is now regarded as a way to
help women achieve a better balance between work and family life. See
Kolja Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
2014), online.

[59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
(2009), directed by Michael Madsen.

[60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
Scientific Revolutions*, 3rd edn (Chicago, IL


al Dynamics of Globalization* (New Haven, CT: Yale University
Press, 2008).

[71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

[72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
(Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

[73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

[74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
and the Bazaar," *First Monday* 3 (1998), online.

[75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
Weidenfeld, 1962), pp. 79--88.

[76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
(Berlin: Suhrkamp, 2013).

[77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
of science and technology studies. See, for instance, Geoffrey C. Bowker
and Susan Leigh Star, *Sorting Things Out: Classification and Its
Consequences* (Cambridge, MA: MIT Press, 1999).

[78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
Maschinen: D


online.[]{#Page_192
type="pagebreak" title="192"}

[85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
an eine Zukunft ohne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
could also say that this anxiety has been caused by the fact that the
automation of labor has begun to affect middle-class jobs as well.

[86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
Write a Better News Story than a Human Reporter?" *Wired* (April 24,
2012), online.

[87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
(New York: New Vessel Press, 2016).

[88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
are not unique in this regard. *Spiegel* has reported that, in Russia,
entire "bot armies" have been mobilized for the "propaganda battle."
Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
*Spiegel Online* (February 28, 2015), online.

[89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
o


w be
traffic-free.

[111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
online.

[112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

[113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
unprocessed, and "cooked," in the sense of processed, derive from the
anthropologist Claude Lévi-Strauss, who introduced them to clarify the
difference between nature and culture. See Claude Lévi-Strauss, *The Raw
and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
IL: University of Chicago Press, 1983).

[114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
2013), online.

[115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
cited quite often is already obsolete: Michael K. Bergman, "White Paper
-- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
Publishing* 7 (2001), online. The more content is dynamically generated
by databases, the more questionable such estimates become. It is
uncontes


then? For the most part, these are unsolved questions.
On the other hand, because of the blending of work and leisure already
mentioned, as well as the general economization of social activity (as
is happening on social []{#Page_126 type="pagebreak" title="126"}mass
media and in the creative economy, for instance), it is hardly possible
now to draw a line between production and reproduction. Thus, this set
of concepts, which is strictly oriented toward economic production
alone, is more problematic than ever. My decision to use these concepts
is therefore limited to clarifying the conceptual transition from the
previous chapter to the chapter at hand. The concern of the last chapter
was to explain the forms that cultural processes have adopted under the
present conditions -- ubiquitous telecommunication, general expressivity
(referentiality), flexible cooperation (communality), and informational
automation (algorithmicity). In what follows, on the contrary, my focus
will turn to the political dynamics that have emerged from the
realization of "productive forces" as concrete "relations of production"
or, in more general terms, as social relations. Without claiming to be
comprehensive, I


00 million users in 2014 -- dominate
the market. The gap has thus widened between user interfaces and the
processes that take place behind them on servers and in data centers,
and this has expanded what Crouch referred to as "the influence of the
privileged elite." In this case, the elite are the engineers and
managers employed by the large providers, and everyone else with access
to the underbelly of the infrastructure, including the British
Government Communications Headquarters (GCHQ) and the US National
Security Agency (NSA), both of which employ programs such as MUSCULAR
to record data transfers between the computer centers operated by large
American providers.[^10^](#c3-note-0010){#c3-note-0010a}

Nevertheless, email essentially remains an open application, for the
SMTP protocol forces even the largest providers to cooperate. Small
providers are able to collaborate with the latter and establish new
services with them. And this creates options. Since Edward Snowden\'s
revelations, most people are aware that all of their online activities
are being monitored, and this has spurred new interest in secure email
services. In the meantime, there has been a whole series of projects
aimed


is no
longer free. The majority of American teens, for example, despite
[]{#Page_143 type="pagebreak" title="143"}no longer being very
enthusiastic about Facebook, continue using the network for fear of
missing out on something.[^40^](#c3-note-0040){#c3-note-0040a} This
contradiction -- voluntarily doing something that one does not really
want to do -- and the resulting experience of failing to shape one\'s
own activity in a coherent manner are ideal-typical manifestations of
the power of networks.

The problem experienced by the unwilling-willing users of Facebook has
not been caused by the transformation of communication into data as
such. This is necessary to provide input for algorithms, which turn the
flood of information into something usable. To this extent, the general
complaint about the domination of algorithms is off the mark. The
problem is not the algorithms themselves but rather the specific
capitalist and post-democratic setting in which they are implemented.
They only become an instrument of domination when open and
decentralized activities are transferred into closed and centralized
structures in which far-reaching, fundamental decision-making powers and
possibilities for action are embedded that legitimize themselves purely
on the basis of their output. Or, to adapt the title of Rosa von
Praunheim\'s film, which I discussed in my first chapter: it is not the
algorithm that is perverse, but the situation in which it lives.
:::

::: {.section}
### Political surveillance {#c3-sec-0008}

In June 2013, Edward Snowden exposed an additional and especially
problematic aspect of the expansion of post-democratic structures: the
comprehensive surveillance of the internet by government intelligence
agencies. The latter do not use collected data primarily for commercial
ends (although they do engage in commercial espionage) but rather for
political repression and the protection of central power interests --
or, to put it in more neutral terms, in the service of general security.
Yet the NSA and other intelligence agencies also record decentralized
communication and transform it into (meta-)data, which are centrally
stored and analyzed.[^41^](#c3-note-0041){#c3-note-0041a} This process
is used to generate possible courses of action, from intensifying the
surveillance of individuals and manipulating their informational
environment[^42^](#c3-note-0042){#c3-note-0042a} to launching military
drones for the purpose of
assassination.[^43^](#c3-note-0043){#c3-note-0043a} The []{#Page_144
type="pagebreak" title="144"}great advantage of meta-data is that they
can be standardized and thus easily evaluated by machines. This is
especially important for intelligence agencie


under the name
Vitality. People insured in Germany, France, and Austria are supposed to
send their health information to the company and, as a reward for
leading a "proper" lifestyle, receive a rebate on their premium. The
long-term goal of the program is to develop "behavior-dependent tariff
models," which would undermine the solidarity model of health
insurance.[^55^](#c3-note-0055){#c3-note-0055a}

According to the legal scholar Frank Pasquale, the sum of all these
developments has led to a black-box society: more and more social
processes are controlled by algorithms whose operations are not transparent
because they are shielded from the outside world and thus from
democratic control.[^56^](#c3-note-0056){#c3-note-0056a} This
ever-expanding "post-democracy" is not simply liberal democracy with a
few problems that can be eliminated through well-intentioned reforms.
Rather, a new social system has emerged in which allegedly relaxed
control over social activity is compensated for by a heightened level of
control over the data and structural conditions pertaining to the
activity itself. In this system, both the virtual and the physical world
are altered to achieve particular goals -- goals


based on a set of fundamental principles defined by the members
themselves. These are delineated in the Debian Social Contract, which
was first formulated in 1997 and subsequently revised in
2004.[^75^](#c3-note-0075){#c3-note-0075a} It stipulates that the
software has to remain "100% free" at all times, in the sense that the
software license guarantees the freedom of unlimited use, modification,
and distribution. The developers understand this primarily as an ethical
obligation. They explicitly regard the project as a contribution "to the
free software community." The social contract demands transparency on
the level of the program code: "We will keep our entire bug report
database open for public view at all times. Reports that people file
online will promptly become visible to others." There are both technical
and ethical considerations behind this. The contract makes no mention at
all of a classical production goal; there is no mention, for instance,
of competitive products or a schedule for future developments. To put it
in Colin Crouch\'s terms, input legitimation comes before output
legitimation. The initiators silently assume that the project\'s basic
ethical, technical, and soci


as \$500,000 a year. This money is
used, for instance, to pay the most important programmers and to
organize working groups, thus ensuring that the development and
distribution of Linux will continue on a long-term basis. The
[]{#Page_158 type="pagebreak" title="158"}businesses that finance the
Linux Foundation may be profit-oriented institutions, but the main work
of the developers -- the program code -- flows back into the common pool
of resources, which the explicitly non-profit Debian Project can then
use to compile its distribution. The freedoms guaranteed by the free
license render this transfer from commercial to non-commercial use not
only legally unproblematic but even desirable to the for-profit service
providers, as they themselves also need entire operating systems and not
just the kernel.

The Debian Project draws from this pool of resources and is at the same
time a part of it. Therefore others can use Debian\'s software code,
which happens to a large extent, for instance through other Linux
distributions. This is not understood as competition for market share
but rather as an expression of the community\'s vitality, which for
Debian represents a central and normative point


g could be
perpetuated in the long term. One of the first results of these
considerations was to develop, following the model of free software,
numerous licenses that were tailored to cultural
production.[^81^](#c3-note-0081){#c3-note-0081a} In the cultural
context, free licenses achieved widespread distribution after 2001 with
the arrival of Creative Commons (CC), a California-based foundation that
began to provide easily understandable and adaptable licensing kits and
to promote its services internationally through a network of partner
organizations. This set of licenses made it possible to transfer user
rights to the community (defined by the acceptance of the license\'s
terms and conditions) and thus to create a freely accessible pool of
cultural resources. Works published under a CC license can always be
consumed and distributed free of charge (though not necessarily freely).
Some versions of the license allow works to be altered; others permit
their commercial use; while some, in turn, only allow non-commercial use
and distribution. In comparison with free software licenses, this
greater emphasis on the rights of individual producers over those of the
community, whose freedoms of u


(by August 2015), versions have been made
available in 289 other languages, 48 of which have at least 100,000
entries. Both its successes -- its enormous breadth of up-to-date
content, along with its high level of acceptance and quality -- and its
failures, with its low percentage of women editors (around 10 percent),
exhausting discussions, complex rules, lack of young personnel, and
systematic attempts at manipulation, have been well documented because
Wikipedia also guarantees free access to the data generated by the
activities of users, and thus makes the development of the commons
fairly transparent for outsiders.[^84^](#c3-note-0084){#c3-note-0084a}

One of the most fundamental and complex decisions in the history of
Wikipedia was to change its license. The process behind this is
indicative of how thoroughly the community of a commons can be involved
in its decision-making. When Wikipedia was founded in 2001, there was no
established license for free cultural works. The best option available
was the GNU Free Documentation License (GFDL), which had been
developed, however, for software documentation. In the following years,
the CC license became the standard, and this []{#Page_1


rocesses that would be
necessary to make their data available to begin with. But public
pressure has been mounting, not least through initiatives such as the
global Open Data Index, which compares countries according to the
accessibility of their information.[^94^](#c3-note-0094){#c3-note-0094a}
In Germany, the Digital Openness Index evaluates states and communities
in terms of open data, the use of open-source software, the availability
of open infrastructures (such as free internet access in public places),
open policies (the licensing of public information,
freedom-of-information laws, the transparency of budget planning, etc.),
and open education (freely accessible educational resources, for
instance).[^95^](#c3-note-0095){#c3-note-0095a} The results are rather
sobering. The Open Data Index has identified 10 []{#Page_169
type="pagebreak" title="169"}different datasets that ought to be open,
including election results, company registries, maps, and national
statistics. A study of 97 countries revealed that, by the middle of
2015, only 11 percent of these datasets were entirely freely accessible
and usable.

Although public institutions are generally slow and resistant in making
their


en though post-democracy wishes to abolish the political
itself and subordinate everything to a technocratic lack of
alternatives. The development of the commons, after all, has shown that
genuine, fundamental, and cutting-edge alternatives do indeed exist. The
contradictory nature of the present is keeping the future
open.[]{#Page_175 type="pagebreak" title="175"}
:::

::: {.section .notesSet type="rearnotes"}
[]{#notesSet}Notes {#c3-ntgp-9999}
------------------

::: {.section .notesList}
[1](#c3-note-0001a){#c3-note-0001}  Karl Marx, *A Contribution to the
Critique of Political Economy*, trans. S. W. Ryazanskaya (London:
Lawrence and Wishart, 1971), p. 21.[]{#Page_196 type="pagebreak"
title="196"}

[2](#c3-note-0002a){#c3-note-0002}  See, for instance, Tomasz Konicz and
Florian Rötzer (eds), *Aufbruch ins Ungewisse: Auf der Suche nach
Alternativen zur kapitalistischen Dauerkrise* (Hanover: Heise
Zeitschriften Verlag, 2014).

[3](#c3-note-0003a){#c3-note-0003}  Jacques Rancière, *Disagreement:
Politics and Philosophy*, trans. Julie Rose (Minneapolis, MN: University
of Minnesota Press, 1999), p. 102 (the emphasis is original).

[4](#c3-note-0004a){#c3-note-0004}  Colin Crouch, *Post-Democracy*
(Cambridge: Polity, 2004), p. 4.

[5](#c3-note-0005a){#c3-note-0005}  Ibid., p. 6.

[6](#c3-note-0006a){#c3-note-0006}  Ibid., p. 96.

[7](#c3-note-0007a){#c3-note-0007}  These questions have already been
discussed at length, for instance in a special issue of the journal
*Neue Soziale Bewegungen* (vol. 4, 2006) and in the first two issues of
the journal *Aus Politik und Zeitgeschichte* (2011).

[8](#c3-note-0008a){#c3-note-0008}  See Jonathan B. Postel, "RFC 821,
Simple Mail Transfer Protocol," *Information Sciences Institute:
University of Southern California* (August 1982), online: "An important
feature of SMTP is its capability to relay mail across transport service
environments."

[9](#c3-note-0009a){#c3-note-0009}  One of the first providers of
Webmail was Hotmail, which became available in 1996. Just one year
later, the company was purchased by Microsoft.

[10](#c3-note-0010a){#c3-note-0010}  Barton Gellman and Ashkan Soltani,
"NSA Infiltrates Links to Yahoo, Google Data Centers Worldwide, Snowden
Documents Say," *Washington Post* (October 30, 2013), online.

[11](#c3-note-0011a){#c3-note-0011}  Initiated by hackers and activists,
the Mailpile project raised more than \$160,000 in September 2013 (the
fundraising goal had been just \$


  Wolfie Christl, "Kommerzielle
digitale Überwachung im Alltag," *Studie im Auftrag der
Bundesarbeitskammer* (November 2014), online.

[20](#c3-note-0020a){#c3-note-0020}  Viktor Mayer-Schönberger and
Kenneth Cukier, *Big Data: A Revolution That Will Change How We Live,
Work and Think* (Boston, MA: Houghton Mifflin Harcourt, 2013).

[21](#c3-note-0021a){#c3-note-0021}  Carlos Diuk, "The Formation of
Love," *Facebook Data Science Blog* (February 14, 2014), online.

[22](#c3-note-0022a){#c3-note-0022}  Facebook could have determined this
simply by examining the location data that were transmitted by its own
smartphone app. The study in question, however, did not take such
information into account.

[23](#c3-note-0023a){#c3-note-0023}  Dan Lyons, "A Lot of Top
Journalists Don't Look at Traffic Numbers: Here's Why," *Huffington
Post* (March 27, 2014), online.

[24](#c3-note-0024a){#c3-note-0024}  Adam Kramer et al., "Experimental
Evidence of Massive-Scale Emotional Contagion through Social Networks,"
*Proceedings of the National Academy of Sciences* 111 (2014): 8788--90.

[25](#c3-note-0025a){#c3-note-0025}  In all of these studies, it was
presupposed that users present the


2014), online.[]{#Page_204 type="pagebreak"
title="204"}
:::
:::

[Copyright page]{.chapterTitle} {#ffirs03}

=
::: {.section}
First published in German as *Kultur der Digitalität* © Suhrkamp Verlag,
Berlin, 2016

This English edition © Polity Press, 2018

Polity Press

65 Bridge Street

Cambridge CB2 1UR, UK

Polity Press

101 Station Landing

Suite 300

Medford, MA 02155, USA

All rights reserved. Except for the quotation of short passages for the
purpose of criticism and review, no part of this publication may be
reproduced, stored in a retrieval system or transmitted, in any form or
by any means, electronic, mechanical, photocopying, recording or
otherwise, without the prior permission of the publisher.

P. 51, Brautigan, Richard: From "All Watched Over by Machines of Loving
Grace" by Richard Brautigan. Copyright © 1967 by Richard Brautigan,
renewed 1995 by Ianthe Brautigan Swenson. Reprinted with the permission
of the Estate of Richard Brautigan; all rights reserved.

ISBN-13: 978-1-5095-1959-0

ISBN-13: 978-1-5095-1960-6 (pb)

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Dat


trans in Constant 2015


raphy, layout and image processing that stretch out
over a period of more than eight years. The questions and answers were recorded in the margins of events such as the yearly
Libre Graphics Meeting, the Libre Graphics Research Unit,
a two-year collaboration between Medialab Prado in Madrid,
Worm in Rotterdam, Piksel in Bergen and Constant in Brussels,
or as part of documenting the work process of the Brussels’
design team OSP. Participants in these intersecting events and
organisations constitute the various instances of ‘we’ and ‘I’ that
you will discover throughout this book.
The transcriptions are loosely organised around three themes:
tools, communities and design. At the same time, I invite you
to read Conversations as a chronology of growing up in Libre
Graphics, a portrait of a community gradually grasping the interdependencies between Free Software and design practice.
Femke Snelting
Brussels, December 2014

Introduction

A user should not be able to shoot himself in the foot

I think the ideas behind it are beautiful in my mind

We will get to know the machine and we will understand
ConTeXt and the ballistics of design
Meaningful transformations

Tools for a Read Write World
Etat des Lieux

Distributed Version Control

Even when you are done, you are not done
Having the tools is just the beginning
Data analysis as a discourse

Why you should own the beer company you design for
Just Ask and That Will Be That
Tying the story to data
Unicodes

If the design thinking is correct, the tools should be irrelevant
You need to copy to understand
What’s the thinking here

The construction of a book (Aether9)
Performing Libre Graphics

The Making of Conversations



ss-art


Larisa Blazic:

Introduction

F/LOSS is not a ghetto for idealists and techno-fetishists – it was ready for
the average user, it was ready for the specialist user, it was ready for all
and, most importantly, the communication lines were open. Given that
Linux distributions extend the life of a computer by at least ten years, in
combination with the likes of Libre Graphics, Open Video and a plethora
of other F/LOS software, the benefits are manifold, important for all and
not to be ignored by any form of creative practice worldwide.

Libre Graphics seems to offer a very exciting transformation of graphic design practice through the adoption of F/LOS software development and
production processes: a hybridisation across these often separated fields of
practice that takes into consideration the openness and freedom to create, copy,
manipulate and distribute, while contributing to the development of visual
communication itself. All this may give a new lease of life to an over-commercialised
graphic design practice, banalised by mainstream culture.
This book brings together reflections on collaboration and co-creation
in graphic design, typography and desktop publishing, but also on gende


the Libre Graphics community. It offers a paradigm
shift, supported by historical research into graphic and type design practice,
that creates strong arguments to re-engage with the tools of production.
The conversations conducted give an overview of a variety of practices and
experiences, show the need for more such conversations, and can help
educate designers and developers alike. The book gives detailed descriptions of
design processes, productions and the potential trade-offs of engaging in software design and development while producing designed artefacts. It points
to the importance of transparent software development, breaking stereotypes and establishing a new image of the designer-developer combo: a fresh
perspective of mutual respect between disciplines and a desire to engage in
an exchange of knowledge beneficial beyond anything proprietary software could ever offer.
Larisa Blazic is a media artist living and working in London. Her interests range from

creative collaborations to intersections between video art and architecture. As senior lecturer
at the Faculty of Media, Arts and Design of the University of Westminster, she is currently
developing a master’s program on F


ful method that I’d been using up until then.
And that I found, quite fascinating.
And once I’d done that, it opened up all kinds of little things that I could
change that made the font editor itself bettitor. Better. Bettitor?

(laughs) That’s almost Dutch.

And so after I’d done that the display I talked about which could show a
word — I realized that I should redo that to take advantage of what I had
done. And so I redid that, and it’s now, it’s now much more usable. It now
shows — at least I hope it shows — more of what people want to see when
they are working with these transformations that apply to the font, there’s
now a list of the various transformations, that can be enabled at any time
and then it goes through and does them — whereas before it just sort of —

well it did kerning, and if you asked it to it would substitute this glyph so
you could see what it would look like — but it was all sort of — half-baked.
It wasn’t very elegant.
And — it’s much better now, and I’m quite proud of that.
It may crash — but it’s much better.

So you bring up half-baked, and when we met we talked about bread
baking.

Oh, yes.

And the pleasure of handling a material when you know it well. Maybe
make reliable bread — meanin


planning to do with the software. If my generative
typesetting platform ... you know ... works and is actually feasible, which is
maybe an 80% job.

Wait a second. You are busy developing another platform in parallel?

Yes, although I’m kind of hovering over it or sort of superseding it as
an interface. You have LaTeX, which has been at version 2e since the
mid-nineties, LaTeX 3 is sort of this dim point on the horizon. Whereas
ConTeXt is changing every week. It’s converting the entire structure of this
macro package from being written in TeX to being written in Lua. And
so there is this transition from what could be best described as an archaic
approach to programming, to this shiny new piece of software. I see it as
being competitive strictly because it has so much configurability. But that’s
sort of ... and that’s the double-edged sword of it, that the configuration
is useless without the documentation. Donald Knuth famously said
that he realised he would have to write both the software and the manual for the
software himself. And I remember in our first conversation about the sort
of paternalistic culture these typographic projects seem to have. Or at least
in the sense of


in HTML and CSS, but it’s not meant to be
... as problematic as that. I’m not sure if that is a real goal, or if that goal
is feasible or not. But it’s not meant to be drawing an artificial line, it’s just
meant to make things easier.

So by pulling apart historically grown elements, it becomes ... possibly modern?
Hypermodern?

Something for now and later.

Yes. Part of this idea, the trick ... This software is called ‘Subtext’ and at
this point it’s a conceptual project, but that will change pretty soon. Its
trick is this idea of separation instead of form and content, it’s translation
and effect. The parser itself has to be mutable, has to be able to pull in
the interface, print like decorations basically from a YAML configuration
file or some sort of equivalent. One of this configuration mechanisms that
was designed to be human readable and not machine readable. Like, well
both, striking that balance. Maybe we can get to that kind of ... talking
about agency a little bit. Its trick to really pull that out so that if you want
to ... for instance now in markdown if you have quotes it will be translated
in ConTeXt into \quotation. In ConTeXt that’s a very simple switch
to turn it into German quotes. Or I guess that’s more like international
quotes, everything not English. For the purposes of markdown there is
no, like really easy way, to change that part of the interface. So that when

I’m writing, when I use the angle brackets as a quote it would turn into
a \quotation in the output. Whereas with ‘Subtext’ you would just go
into the interface type like configuration and say: These are converted into
a quote basically. And then the effects are listed in other configuration files
so that the effects of quotes in HTML can be ...
... different.

Yes. Maybe have specific CSS properties for spacing, that kind of stuff. And
then in ConTeXt the same sort of ... both the environmental setup as well
as the raw ‘what is put into the document when it’s translated’. This kind of
separation ... you know at that point if both those effects are already the way
that you want them, then all you have to do is change the interface. And
then later on, when a new typesetting system comes out – maybe iTeX, Knuth’s
joke, anyway – that kind of separation seems to imply a future-proofing
that I find very elegant. That you can just add later on the effects that you
need for a different system. Or a different version of a system, not that you
have to learn ‘mark 6’, or something like that ...
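The separation sketched above – markup mapped to a named effect, with per-backend definitions of that effect – could look something like the following YAML configuration. Since ‘Subtext’ is described here as a conceptual project, every key and value below is invented purely for illustration:

```yaml
# Hypothetical Subtext-style configuration (illustrative only).
# "interface": which markup the author types, and the abstract
# effect it maps to -- not the output itself.
interface:
  quote:
    markers: ["«", "»"]
    effect: quotation

# "effects": what each backend emits for a named effect,
# so changing the interface never touches the outputs.
effects:
  quotation:
    context: "\\quotation{%s}"            # ConTeXt translation
    html: "<q class=\"quote\">%s</q>"     # HTML translation, spacing via CSS
```

Swapping angle brackets for another marker then means editing one line under `interface`, while the ConTeXt and HTML effects stay untouched.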
Back to the future ... I wonder about ConTeXt being bound to a pa


Verbindingen/Jonctions: Tracks in electr(on)ic fields. Constant Verlag, 2009.
http://ospublish.constantvzw.org/sources/vj10


take it to the next level. How do I turn a lyric sheet from something that
is sort of static to ... you know ... two pages that are like put directly on the
screen next to each other. Like a screen based system where it’s animated
to the point ... and this is what we actually started to karaoke last night ...
so you have an English version and a Spanish version – for instance in the
case of the music that I’ve been doing. And we can animate. We can have
timed transitions so you can have a ‘current lyric indicator’ move down the
page. That kind of use case is not something that Pragma 8 is ever going
to run into. But as soon as it is done and documented then what’s the next
thing, what kind of animations are gonna be ... or what kind of ... once that
possibility is made real or concrete ... you know, so I kind of see it as a very
iterative process at this point. I don’t have any kind of grand scheme other
than ‘Subtext’ kind of replacing Microsoft Word as the dominant academic
publishing platform, I think. (laughs)

Just take over the world.


ppable is the backend. How much can we
go in and kind of ... you know. And it is an inspirational question to me,
because now I’m trying to envision a different page. And I’m really curious
about that. But I think that ConTeXt itself will likely be pretty stable in its
scope ... in that way of being ... sort of ... deterministic in its expectations.
But where that leaves us as users ... first I’d be really surprised if the engine
itself, if LuaTeX was not being some way written to ... I feel really ignorant
about this, I wish I just knew. But, yeah, there must be ... There is no way
to translate this into a modern programming language without somehow
thinking about this in terms of the design. I guess to certain extent the
answer to your question is dependent on the conscientiousness of Taco and
the other LuaTeX developers for this kind of modularity. But I don’t ... you
know ... I’m actually feeling very imaginatively lacking in terms of trying to
understand what your award-winning book did not accomplish for you ...
Yeah, what’s wrong with that?

I think it would be good to talk with Pierre, not Pierre Marchand but Pierre ...
... Huggybear.

Yeah. We have been talking


nce I have a sort of sense of
what I want to do, I can figure it out. Right now you’re sort of in the dark about
the endless possibilities ...

Its existence is very opaque in some ways. The way that it’s implemented,
like everything about it is sort of ... looking at the macros that they wrote,
the macros that you invoke ... like ... that takes ... flow control in TeX is like
... I mean you might as well write it in Bash or ... I mean I think Bash would
even be more sensible to figuring out what’s going on. So, the switch to Lua
there is kind of I think a useful step just in being more transparent. To allow
you to get into becoming more intimate with the source or the operation

of the system ... you know ... without having to go ... I mean I guess ... the
TeX Book would still be useful in some ways but that’s ... I mean ... to go
back and learn TeX when you’re just trying to use ConTeXt is sort of ...
it’s not ... I’m not saying it’s, you know ... it’s a proper assumption to say oh
yeah, don’t worry about the rules and the way TeX is organised but you’re not
writing your documents in ConTeXt the way you would write them if you’re
using plain TeX. I mean that


y difficult to do in this sort of aiming sense. I mean it’s fun to do but
it’s a strange kind of posters you get.

You can’t fit it all in your head at once. It’s not possible.
No. So it’s okay to have a bit of delay.

I wondered to what extent, if it were updated in real time, all the changes
you’re making in the code, if compilation was instantaneous, how that would
affect the experience. I guess it would still have this ballistic aspect, because
what you are doing is ... and that’s really the side of the metaphor ... or
a metaphorical difference between the two. One is like a translation. The
metaphor of ok this code means this effect ... That’s very different from picking
a brush and choosing the width of the stroke. It’s like when you initialise
a brush in code, set the brush width and then move it in a circle with a
radius of x. It’s different than taking the brush in Scribus or in whatever
WYSIWYG tool you are gonna use. There is something intrinsically different about a translation from primitives to visual effect than this kind of
metaphorical translation of an interaction between a human and a canvas ...
kind of put into software terms.

But there is a translation from me, the human, to the machine, to my human eye
again, which is hard to grasp. Without wanting it to be made invisible somehow.

Or to assume that it is not there. This would be my dream tool that would
allow you to sense that kind of translation without losing the ... canvasness of the
canvas. Because it’s frustrating that the canvas has to not speak of itself to be able
to work. That’s a very sad future for the canvas, I think.

I agree.

But when it speaks of itself it’s usually seen as buggy or it doesn’t work. So that’s
also not fair to the canvas. But there is something in drawing digitally, which
is such a weird thing to do actually, and this is interesting in this sort of cyborgs
we’re becoming, which is all about forgetting about the machine and not feeling
what you do. And it’s completely a different world


our surface is still unfoldable into something flat. It is
very easy to get away from flat surfaces in Blender. Once I have the molded
shape, I can export that into an .off file which my unwrapper can import
and that I can then unwrap into the sleeves and the front and the back as
well as project a panoramic image onto those pieces. Once I have that, it
becomes a pattern laid out on a giant flat surface. Then I can use Laidout
once again to tile pages across that. I can export into a .pdf with all the
individual pieces of the image that were just pieces of the larger image that
I can print on transfer paper. It took forty iron-on transfer papers I ironed
with an iron provided to me by the people sitting in front of me so that
took a while but finally I got it all done, cut it all out, sewed it up and there
you go.
Could you say something about your interest in moving from 2D to 3D
and back again? It seems everything you do is related to that?
I don’t know. I’ve been making sculpture of various kinds for quite a
long time. I’ve always drawn. Since I was about eighteen, I started making
sculptures, mainly mathematical woodwork. I don’t quite have access to a
full woodwork workshop anymore, so I cannot make as much wood


cally create, for instance a T-shirt or a ball, or other paper shapes.
Is there ever any work that stays in the computer, or does it always need
to become physical?

Usually, for me, it is important to make something that I can actually
physically interact with. The computer I usually find quite limiting. You
can do amazing things with computers, you can pan around an image, that
in itself is pretty amazing but in the end I get more out of interacting with
things physically than just in the computer.
But with Laidout, you have moved folding into the computer! Do you
enjoy that kind of reverse transformation?

It is a challenge to do and I enjoy figuring out how to do that. In making
computer tools, I always try to make something that I can not do nearly as
quickly by hand. It’s just much easier to do in a computer. Or in the case
of spherical images, it’s practically impossible to do it outside the computer.
I could paint it with airbrushes and stuff like that but that in itself would
take a hundred times longer than just pressing a couple of commands and
having the computer do it all automatically.

My feeling about your work is that the time you spent working on the
program is in i


ng or the tools
are in service to those things. That’s how I think of it. I can see that ...
I’ve distributed Laidout as something in itself. It’s not just some secret tool
that I’ve put aside and presented only the artwork. I do enjoy the tools
themselves.
I have a question about how the 2D imagines 3D. I’ve seen Pierre and
Ludi write imposition plans. I really enjoy reading this, almost as a sort of
poetry, about what it would be to be folded, to be bound like a book. Why is
it so interesting for you, this tension between the two dimensions?
I don’t know. Perhaps it’s just the transformation of materials from
something more amorphous into something that’s more meaningful, somehow. Like in a book, you start out with wood pulp, and you can lay it out in
pages and you have to do something to that in order to instil more meaning
to it.
Is binding in any way important to you?
Somewhat. I’ve bound a few things by hand. Most of my cartoon books
ended up being just stapled, like a stack of paper, staple in the middle and
fold. Very simple. I’ve done some where you cut down the middle and lay
the sides on top and they’re perfect bound. I’ve done just a couple where
it’



very related to your drawings. There’s a distinct quality. I was wondering
how you feel about that, how the interaction with the program relates to the
drawings themselves.

I think it just comes back to being very visually oriented. If you have to
enter a lot of values in a bunch of slots in a table, that’s not really a visual
way to do it. Especially in my artwork, it’s totally visual. There’s no other
component to it. You draw things on the page and it shows up immediately.

It’s just very visual. Or if you make a sculpture, you start with this chunk
of stuff and you have to transform it in some way and chop off this or sand
that. It’s still all very visual. When you sit down at a computer, computers
are very powerful, but what I want to do is still very visually oriented. The
question then becomes: how do you make an interface that retains the visual
inputs, but that is restricted to the types of inputs computers need to have
to talk to them?
The way someone sets up his workshop says a lot about his work. The way
you made Laidout and how you set up its screen, it’s important to define a spot
in the space of the possible.

What is nice is that you made the visualisa


of the things we talk about, are putting things from
the real world into the computer. But you are putting things from the computer
into the real world.
Can you describe again these two types of imposition, the first one being
very familiar to us. It must be the most frequently asked question on the
Scribus mailing list: How to do imposition. Even the most popular search
term on the OSP website is ‘Bookletprinting’. But what is the difference with
the plan for a 3D object? A classic imposition plan is also somehow about
turning a flat surface into a three dimensional object?
It is almost translatable. I’m reworking the 3D version to be able to
incorporate the flat folding. It is not quite there yet, the problem is the
connection between the pages. Currently, in the 3D version, you have a
shape that has a definitive form and that controls how things bleed across
the edges. When you have a piece of paper for a normal imposition, the
pages that are next to each other in the physical form are not necessarily
related to each other at all in the actual piece of paper. Right now, the piece
of paper you use for the 3D model is very defined, there is no flexibility.
Give me a few months!
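As an aside, the classic flat imposition the Scribus question asks about reduces to a small page-ordering computation. This is a minimal sketch of the arithmetic for a saddle-stitched booklet – not Laidout’s or Scribus’s actual implementation, and the function name is invented:

```python
def booklet_order(n_pages: int) -> list[tuple[int, int]]:
    """Return (left, right) page pairs for saddle-stitched sheets.

    Pages are 1-based. The count is padded up to a multiple of 4,
    with 0 standing in for a blank page.
    """
    n = (n_pages + 3) // 4 * 4          # pad to a multiple of 4
    page = lambda p: p if p <= n_pages else 0

    pairs = []
    for i in range(n // 2):
        if i % 2 == 0:
            # outer side of a sheet: highest remaining page, lowest
            pairs.append((page(n - i), page(1 + i)))
        else:
            # inner side: lowest remaining, highest
            pairs.append((page(1 + i), page(n - i)))
    return pairs

print(booklet_order(8))
# [(8, 1), (2, 7), (6, 3), (4, 5)]
```

Folding the printed sheets in half and nesting them then puts pages 1 to 8 in reading order, which is exactly the property that makes a “real” imposition plan more than a layout grid.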
S


te as complicated. You take a piece
of paper, cut out a square and another square, and then you can fold it and
you end up with a square that is actually made up of four different sections.
Then you can take the middle section, and you get another page and you can

keep folding in strange ways and you get different pages. Now the question
becomes: how do you define that page, that is a collection of four different
chunks of paper? I’m working on that!
We talk about the move from 2D to 3D as if these pages are empty. But
you actually project images on them and I keep thinking about maps, transitional objects where physical space is projected on paper which then becomes a
second real space and so on. Are you at all interested in maps?
A little bit. I don’t really want to, because it is such a well-explored
field already. For many hundreds of years the problem has been how to
represent a globe on a more or less two-dimensional surface. You
have to figure out a way to make globe gores or other ways to project it and
then glue it onto a ball, for example. There is a lot of work done with that
particular sort of imagery, but I don’t know.
Too many people in the field!

Yes. On


Graphics Research Unit and
of course Relearn.
I hope you are also interested in this, and able to make time for it. I
would imagine a more or less structured session of around two hours
with at least four of you participating, and I will prepare questions
(and cake).
Speak soon!
xF


How do you usually explain Git to design students?
Before using Git, I would work on a document. Let’s say a layout, and to
keep a trace of the different versions of the layout, I would append _01, _02
to the files. That’s in a way already versioning. What Git does, is that it
makes that process somehow transparent in the sense that, it takes care of
it for you. Or better, you have to make it take care of it for you. So instead of
having all files visible in your working directory, you put them in a database,
so you can go back to them later on. And then you have some commands to
manipulate this history. To show, to comment, to revert to specific versions.
More than versioning your own files, it is a tool to synchronize your work
with others. It allows you to work on the same projects together, to drive
parallel projects.
It really is a tool to make collaboration easier. It allows you to see differences.
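The operations described here – putting files in a database instead of appending _01, _02, then showing, commenting, reverting, and seeing differences – can be sketched as a short Git session. The file name and commit messages are invented for illustration:

```shell
# Instead of saving layout_01.svg, layout_02.svg, ..., track one file
# in a repository and let Git keep the versions.
git init booklet && cd booklet
git config user.name "Designer"              # identity Git records
git config user.email "designer@example.org" # with each version

echo "first draft" > layout.svg
git add layout.svg
git commit -m "First version of the layout"  # the comment on this version

echo "second draft" > layout.svg
git commit -am "Tighter margins"

git log --oneline                  # show the history
git diff HEAD~1 -- layout.svg      # see differences between versions
git checkout HEAD~1 -- layout.svg  # revert the file to the previous version
```

The same repository, pushed to a shared server, is what lets several people synchronize work on one project or drive parallel versions of it.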


a document. In a way it works more like
Dropbox. Every time you save it’s synchronized with the server directly.

So you need to find a balance between the very conscious commits you
make with Git and the fluidness of Etherpad, where the granularity is
much finer. Sparkleshare would be in between?
I think it would be interesting to have this kind of Sparkleshare behaviour, but
only when you want to work synchronously.

So you could switch in and out of different modes?

Usually Sparkleshare is used by people who don’t want to get too much involved
in Git and its commands. So it is really transparent: I send my files, it’s synchronized. I think it was really made for this kind of Dropbox behaviour. I think
it would make sense only when you want to have your hands on the process. To
have this available only when you decide, OK I go synchronous. Like you say,
if you have a commit for every letter it doesn’t make sense.
It makes sense. A lot of things related to versioning in software development
are meant to track bugs, to track programming choices.

I don’t know for you ... but the way I interact with our Git repository since we
started to work with it ... I almost never went


ing to me.

It has been appropriated by designers as something they want. That’s why it’s
interesting to look at the Web Standards Project where designers really fight
for a separation of content and form. I think that this is somehow making
the work of designers quite ... boring. Could you talk a bit about how this is
done?
It’s a continuum. You can’t say that something is exactly content or exactly
presentation because there are gradations. If you take a table, you’ve already
decided that you want to display the material in a tabular way. If it’s a real
table, you should be able to transpose it. If you take the rows and columns,
and the numbers in the middle then it should still work. If you’ve got
‘sales’ here and if you’ve got ‘regions’ there, then you should still be able to
transpose that table. If you’re just flipping it 90 degrees then you are using
it as a layout grid, and not as a table. That’s one obvious thing. Even then,
deciding to display it as a tabular thing means that it probably came from a
much bigger dataset, and you’ve just chosen to sum all of the sales data over

one year. Another one: you have again the sales data, you could have i


ifferently. You might have a different font, you might have a
different way of doing it, you might use letter-spacing, etc. Whereas if you
tag that in as italics then you’ve only got italics, right? It’s a simple example
but at the end of the day you’re going to have to decide how that is displayed.
You mentioned print. In print no one sees the intermediate result. You see
ink on paper. If I have some Greek in there and if I’ve done that by actually
typing in Latin letters on the keyboard and putting a Greek font on it and
out comes Greek, nobody knows. If it’s a book that’s being translated, there
might be some problems. The more you’re shipping the electronic version
around, the more it actually matters that you put in the Greek letters as
Greek because you will want to revise it. It matters that you have flowing
text rather than text that has been hand-ragged because when you put in
the revisions you’re going to have to re-rag the entire thing or you can just
say re-flow and fix it up later. Things like that.

The idea of time, and the question of delay is interesting. Not how, but when you
enter to fine-tune things manually. As a designer of books, you’re alwa


st time we met, you summarized in a very relevant way the
history of font design software, which is proof in itself that everything is related
with fonts and these kinds of small networks, and I would like you to summarize it
again.
(Laughing)
Alright. In that whole journey of getting into this area of parallel publishing and automated design, I was asking around for people who
worked in that area because at that time not many people had worked in
parallel publishing. It’s a much bigger deal now, especially in the Free
Software community where we have Free Software manuals translated into
many languages, written in .doc and .xml and then transformed into print
and web versions and other versions. But back then this was kind of a new
concept; not many people worked on it. And so, asking around, I heard about
the department of typography at the University of Reading. One of the lecturers there, actually the lecturer of the typeface design course, put me on
to a designer in Holland, Petr van Blokland. He’s a really nice guy, really
friendly. And I dropped him an e-mail as I was in Holland that year – just
dropped by to see him and it turned out he’s not only involved in parallel
publishing and automated design, but also in typedesi


, and they give more of a plan: This
is what I expect to be doing. So, I think it has been interesting to see how
people have adopted that and what’s nice about it, is that it adds a really nice
human element to all this empirical data.
I wanted to ask you about the data, without getting too technical, could
you explain how these data are structured, what do the log files look like?

So the log files are all in XML, and generally we compress them, because
they can get rather large. And the reason that they are rather large is that we
are very verbose in our logging. We want to be completely transparent with
respect to everything, so that if you have some doubts or if you have some
questions about what kind of data has been collected, you should be able to
look at the log file, and figure out a lot about what that data is. That’s how
we designed the XML log files, and it was really driven by privacy concerns
and by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database, so
that we can query the data set.
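The parse-and-load step described here can be sketched as follows. The element and attribute names (`event`, `name`, `time`) are hypothetical stand-ins, not ingimp’s actual log schema.

```python
# Read a verbose XML log and push one row per event into a database,
# so the data set can be queried directly.
import sqlite3
import xml.etree.ElementTree as ET

LOG = """<log>
  <event name="tool-select" time="1.25"/>
  <event name="undo" time="3.50"/>
</log>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (name TEXT, time REAL)")
for ev in ET.fromstring(LOG).iter("event"):
    db.execute("INSERT INTO events VALUES (?, ?)",
               (ev.get("name"), float(ev.get("time"))))

count, = db.execute("SELECT COUNT(*) FROM events").fetchone()
assert count == 2
```

The point of the database step is exactly the one made above: once the verbose log is parsed out, arbitrary queries over the data set become cheap.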
Now we are talking about privacy ... I was impressed by the work you have done
on this; the project is unusually clear about why certain things are logged, and
other things not; mainly to prevent the possibility of ‘playing back’ actions so that
one could identify individual users from the data set. So, while I understand
there are privacy issues at stake I was wondering ... what if you could look at the
collected data as a kind of scripting


eography that might
be replayed later?
Yes, we have been fairly conservative with the type of information that we
collect, because this really is the first instance where anyone has captured
such rich data about how people are using software on a day to day basis,
and then made all that data publicly available. When a company does
this, they will keep the data internally, so you don’t have this risk of someone outside figuring something out about a user that wasn’t intended to be
discovered. We have to deal with that risk, because we are trying to go about
this in a very open and transparent way, which means that people may be
able to subject our data to analysis or data mining techniques that we haven’t
thought of and extract information that we didn’t intend to be recording in
our file, but which is still there. So there are fairly sophisticated techniques
where you can do things like look at audio recordings of typing and the timings between keystrokes, and then work backwards with the sounds made
to figure out the keys that people are likely pressing. So, just with keyboard
audio and keystroke timings alone you can often give enough information
to be able to reconstr
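The side channel described here rests on inter-keystroke intervals: the gap between two presses depends on which key pair was typed. A minimal sketch of that feature-extraction step, with invented timestamps:

```python
# Map each consecutive key pair (digraph) to the delays observed between
# them; these timing profiles are what make reconstruction possible.
def digraph_timings(events):
    """events: list of (key, timestamp) pairs in typing order."""
    timings = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        timings.setdefault((k1, k2), []).append(t2 - t1)
    return timings

events = [("t", 0.00), ("h", 0.08), ("e", 0.15)]
assert digraph_timings(events)[("t", "h")] == [0.08]
```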


om number. So, from log to log then, we can track if
people use the same image names, but we have no idea of what the original
string was. There are these little ‘gotchas’, things to look out for, that I
don’t think most people are aware of, and this is why I get really concerned
about instrumentation efforts right now, because there isn’t this body of
experience of what kind of data we should collect, and what we shouldn’t
collect.
As we are talking about this, I am already more aware of what data I would allow
to be collected. Do you think by opening up this data set and the transparent
process of collecting and not collecting, this will help educate users about these
kinds of risks?
It might, but honestly I think the thing that would educate people
the most is a really large privacy error that got a lot of news
coverage, because then people would become more aware of it. Right now
– and this is not to say that we want that to happen with ingimp – but
when we bring people in and we ask them about privacy, Are you concerned
about privacy?, and they say No, and we say Why? Well, they inherently trust
us, but the fact is that Open Source also


y. Because the whole point
of a creative project is that you’re doing something that hasn’t been done
before. And we have all struggled with this before. There are two things you
don’t know at the beginning of a contract. The first is how long it will
take and the second is what the criteria of being finished will be. You don’t
know either of those two things, and, since you don’t, determining the value
of that upfront is a complete guess. Which means that, when you agree to a
fixed-price term, you are agreeing to take on yourself the risk of the delivery
of the project. So it’s a transfer of risks. Of course the people that are buying
your labour as commodity want to put that risk back on you. They don’t
want to take the risk so they make you do that, because they can’t answer
the question of how much it will cost and how long it will take. They want
a guarantee of a fixed price and they want you to take all the risk. Which is
very unfair because it’s their product in the end; the end product is owned
by them and not by you. It’s a very exploitative relationship to force you
to take the risk for capitalizing their product. It’s a bad relationship from
the beginning


, the people that develop Scribus would have to own a magazine
that is enabled by Scribus. And if they can own the magazine that Scribus
enables then they can capture enough of that value to fund the development
of Scribus, and it would actually develop very quickly and be very good,
because that’s actually a total system. So right from the software to the
design, to the journalism, to the editing, to the sale, to the capture of the
value of the end consumer. But because it doesn’t do that, they’re giving
Free Software away ... To whom? Where is the value captured? Where is the
use value transferred into exchange value? It’s this point that you have to get
all the way to, and if you don’t make it all the way there, even if you stop a
mile short, in that mile all of the surplus value will be sucked out.

This conversation took place in Montreal on the last day of
the Libre Graphics Meeting 2011. In the panel How to
keep and make productive libre graphics projects?, Asheesh
had responded rather sharply to a remark from the audience that only a very small number of women were
present at LGM: Bringing the problem back to gender is
avoiding the general problem that F/LOSS ha


natural overlap.

FS
What connects the two?

ER
It is really about the idea of hacking. The first assignment in the
class is not to make anything, but simply to identify systems in the city.
What are elements that repeat. Trying to find which ones you can slip
into. It has been happening in graffiti forever. Graffiti in New York in
the eighties was to me a hack, a way to have giant paintings circulating in
the city ... There is a lot of room to explore there.
FS
Your experience with the Blender community 14 did not sound like an
easy bridge?

ER
Recently I released a piece of software that takes a .gml file and
translates it into a .stl file, which is a common 3D format. So you can
basically take a graffiti gesture and import it into software like Blender.
I used Blender because I wanted to highlight this tool, because I want
these communities to talk to each other.
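The conversion described here can be sketched roughly as below. It assumes the common GML layout of `<stroke>` elements containing `<pt>` points with `<x>`, `<y>` and `<t>` children (real GML files carry more metadata), and emits each stroke segment as a thin ASCII-STL facet that a tool like Blender can import; the released converter surely works differently in detail.

```python
# Sketch: read the x/y points of a GML stroke and write a minimal
# ASCII STL, one near-degenerate facet per segment.
import xml.etree.ElementTree as ET

GML = """<gml><tag><drawing><stroke>
  <pt><x>0.0</x><y>0.0</y><t>0.0</t></pt>
  <pt><x>1.0</x><y>0.5</y><t>0.2</t></pt>
  <pt><x>2.0</x><y>0.0</y><t>0.4</t></pt>
</stroke></drawing></tag></gml>"""

def stroke_points(xml_text):
    root = ET.fromstring(xml_text)
    return [(float(pt.find("x").text), float(pt.find("y").text))
            for pt in root.iter("pt")]

def to_stl(points, name="tag"):
    """Each segment becomes one facet; importers keep the edges."""
    lines = ["solid " + name]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        lines += ["facet normal 0 0 1", "outer loop",
                  f"vertex {x1} {y1} 0", f"vertex {x2} {y2} 0",
                  f"vertex {x2} {y2} 0.1", "endloop", "endfacet"]
    lines.append("endsolid " + name)
    return "\n".join(lines)

pts = stroke_points(GML)
assert len(pts) == 3
assert to_stl(pts).count("facet normal") == 2
```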
So I was taking a tag that was created in the streets of Vienna and pulling
it into Blender and in the end I was exporting it to something that could

13 The Free Art and Technology (F.A.T.) Lab is an organization dedicated to enriching the
public domain through the research and development of creative technolo


nity sites. I only saw it when my cousin,
who is a big Blender user, e-mailed me the thread. There are about a hundred dedicated Blender users discussing the legitimacy of graffiti in art
and how their tools are used 15 ; pretty interesting but also pretty conservative.
FS

Why do you think the Blender community responded in that way?

It doesn’t surprise me that much. Graffiti is hard to accept, especially
when we are talking about tags. So the only reason we might be slightly
surprised by hearing people in the Open Source community react that
way, is because intellectual property doesn’t always translate to physical
property. Writing your name on someone’s door is something people universally don’t like. I understand. For me the connection makes sense but
just because you make Open Source doesn’t mean you’ll be interested in
graffiti or street art or vice versa. I think if I went to a Blender conference
and gave a talk where I explained sort of where I see these things overlap,
I could make a better case than the three minute video they reacted to.
FS
What about Gesture Markup Language instead of Graffiti Markup
Language?

ER
Essentially GML records x-y-time data. If you ta


o much to do with gesture, but more with how you would draft in a
computer. TEMPT is plotting points, so the time data might not be so
interesting but because it is in the same format, the community might
pick it up and do something with it. All the TEMPT data he writes with
his eyes and it is uploaded to the 000000book site automatically. That
allowed another artist called Benjamin Gaulon 16 who I now know, but
didn’t know at the time, to use it with his Print Ball project. He took the
tag data from a paralyzed graffiti writer in Los Angeles and painted it on
a wall in Dublin. Eye-movement translated into a paint-ball gun ... that
is the kind of collaboration that I hope GML can be the middle-point
for. If that happens, things can start to extrapolate on either end.
FS
You talked about posting a wish-list and being surprised that your
wishes were fulfilled within weeks. Why do you think that a project like
EyeWriter, even if it interests a lot of people, has a hard time gathering
collaborators, while something much more general like GML seems to be
more compelling for people to contribute to?

ER

16 Benjamin Gaulon, Print Ball
http://www.eyewriter.org/paintball-shooting-robot-writes


g that I’m not gonna be the main GML developer. I’m
already not, there’s already people doing way more stuff with it than I am.
FS

How does it work when someone proposes a feature?

ER They just e-mail me (laughs). But right now there hasn’t been a ton
of that because it’s such a simple thing; once you start cramming too much
into it, it starts feeling wrong. But all it’s gonna take is for someone to make
a new app that needs something else, and then there will be a reason to
change it, but I think the change will always be adding, not removing.

The following text is a transcription of a talk by and conversation with Denis Jacquerye in the context of the Libre
Graphics Research Unit in 2012. We invited him in the
context of a session called Co-position where we tried to
re-imagine layout from scratch. The text-encoding standard Unicode, and moreover Denis’ precise understanding of the many cultural and political path-dependencies
involved in its making, felt like an obvious place
to start. Denis Jacquerye is involved in language technology, software localization and font engineering. He’s
been the co-lead of the DéjàVu Font project and works
with the Af


onal phonetic alphabet but also all the accented ones ...
As they were used in other encodings, and Unicode initially wanted to
be compatible with everything that already existed, they added them. Then
they figured they already had all those accented characters from other encodings, so they’re also going to add all the ones they know are used even
though they were not encoded yet. They ended up with different names because they had different policies at the beginning instead of having the same
policy as now. They added here a bunch of Latin letters with marks that
were used for example in transcription. So if you’re transcribing Sanskrit for
example, you would use some of the characters here. Then at some point
they realized that this list of accented characters would get huge, and that
there must be a smarter way to do this. Therefore they figured you could
actually use just parts of those characters as they can be broken apart: a
base letter and marks you add to it. You may have a single character that
can be decomposed canonically into the letter B and a combining dot above,
and you have the character for the dot above in the block of the diacritical
marks. You have access to all the diacritical marks they th
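This decomposition mechanism can be checked directly with Python’s unicodedata module; the example letter, b with dot above, stands in for the “letter B” case mentioned above.

```python
# Canonical decomposition in practice: the precomposed letter and its
# base-plus-mark sequence are two spellings of the same character.
import unicodedata

precomposed = "\u1E03"            # ḃ, LATIN SMALL LETTER B WITH DOT ABOVE
decomposed = unicodedata.normalize("NFD", precomposed)
assert decomposed == "b\u0307"    # base letter b + combining dot above
# Composition (NFC) reassembles the precomposed form:
assert unicodedata.normalize("NFC", decomposed) == precomposed
```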


people
have different needs.
There is a lot of documentation within Unicode, but it’s quite hard to find
what you want when you’re just starting, and it’s quite technical. Most of it
is actually in a book they publish at every new version. This book has a few
chapters that describe how Unicode works and how characters should work
together, what properties they have. And all the differences between scripts
are relevant. They also have special cases trying to cater to those needs that
weren’t met or the proposals that were rejected. They have a few examples
in the Unicode book: in some transcription systems they have this sequence
of characters or ligature; a t and a s with a ligature tie and then a dot above.
So the ligature tie means that t and s are pronounced together and the dot
above is err ... has a different meaning (laughs). But it has a meaning! But
because of the way characters work in Unicode, applications actually reorder
it: whatever you type in is reordered so that the ligature tie ends up being
moved after the dot. So you always have this representation because you
have the t, there should be the dot, and then there should be the ligature tie
and then the s. So
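The reordering described here is mechanical: marks are sorted by their combining class, and the dot above (class 230) sorts before the ligature tie (class 234), whichever order was typed.

```python
# Canonical reordering of combining marks, exactly the ṫ͇s case above.
import unicodedata

tie, dot = "\u0361", "\u0307"  # combining double inverted breve; combining dot above
assert unicodedata.combining(dot) == 230
assert unicodedata.combining(tie) == 234

typed = "t" + tie + dot + "s"  # tie typed first, dot second
stored = unicodedata.normalize("NFD", typed)
assert stored == "t" + dot + tie + "s"  # the tie has moved after the dot
```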


ngineer who headed the Committee
for German Industry Standards in the 1920s stated that For the typefaces of
the future neither tools nor fashion will be decisive. His committee supervised
the development of DIN 1451, a standard font that should connect economy of use with legibility, and enhance global communication in service of
the German industry. I think it is no surprise that a similar phrasing can be
found in W3C documents; the idea to unify the people of the world through
a common language re-surfaces and has the same tendency to negate materiality and specificity in favour of seamless translation between media and
markets.
Type historian Ellen Lupton brought up the possibility of designing typographic systems that are accessible but neither finite nor operating within a
fixed set of parameters. Although I don’t know what she means by using the
term ‘open universal’, I think this is why we are attracted to Free Software:
it has the potential to open up both the design of parameters as well as their
application. Which leads to your next question.
You mentioned the use of generative design just now. How far do you go into
this? Within the generative design field there seem to be a


opinions and reflections. It’s really interesting from the
content side, at least for me – I don’t dare to speak for Xavier. So that’s
basically how it started.
You developed a constellation of tools that together are producing the book.
Can you explain what the elements are, how this book is made?
We decided in the beginning to use Etherpad for the editing. A lot of
documentation during Constant events was done with Etherpad and I found
its very direct access to editing quite inspiring. Earlier this year we prepared a
workshop for the Libre Graphics Meeting, where we’d have a transformation
from Etherpad pages to a printable .pdf. The idea was to somehow separate
the content editing and the rendering. Basically I wanted to follow some
kind of ‘pull logic’. At a certain point in the process, there is an interface
where you can pull out something without the need to interfere too much
with the inner workings of this part. There is the stable part, the editing on
the Etherpad, and there is something, that can be more experimental and
unstable which transforms the content to again a stable, printable version. I
tried to create a custom markdown dialect, meant to be as simple as possible.
It should be reduced to the elements that are actually needed.
For example if we have an interview, what is required from the content side?
We have text and changing speakers. That’s more or less the most important
information.
So on the first level, we have this simple format and from there the transformation process starts. The idea was to have a level where basically anybody
who knows how to use a text editor can edit the text. But at the same
complex during the transformation process. But it should always have this
level, where it’s quite simple. So just text and for example this one markup
element for ok now the speaker changes.
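That “just text plus one markup element” idea can be sketched like this: a line starting with “@” marks a speaker change and everything else is plain text. The “@” convention and the LaTeX macro name are invented stand-ins; the actual dialect is not spelled out in this excerpt.

```python
# Minimal transformer: speaker-change markers become a LaTeX-style
# macro, all other lines pass through untouched.
def transform(pad_text):
    out = []
    for line in pad_text.splitlines():
        if line.startswith("@"):                      # "now the speaker changes"
            out.append("\\speaker{%s}" % line[1:].strip())
        else:
            out.append(line)
    return "\n".join(out)

src = "@FS\nWhat connects the two?\n@ER\nIt is really about hacking."
assert "\\speaker{FS}" in transform(src)
```

Anyone who can use a text editor can write the source; the complexity lives entirely on the transformer side, which is the separation described above.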
In the beginning we experimented with different tools, basically small
scripts to perform all kinds of layout tasks. Xavier for example prepared a
hotglue2svg converter. After that, we thought, why don’t we try to connect different approaches? Not only the very strict markdown to TeX to
.pdf transformations, but to think about under which circumstances you
would actually prefer a canvas-based approach. What can you do on a canvas
that you can’t do, or that is much harder, with a markup language?
It seems you are developing an ad hoc markup language? Is that related to
what you wrote in the workshop description for Operating Systems: 1 Using
operating systems as a metaphor, we try to imagine systems that are both
structured and open?

Yes. The idea was to have these connected/disconnected parts. So you have
the part where the content is edited in collaboration and you have the transformer script running separately on the individuals’ computers. For me this
1

http://libregraphicsmeeting.org/2014/program/

solved in a way the problem of stability. You can use quite elaborate,
reliable software like Etherpad and derive something from it without going
into its inner workings. You just pull the content from it, without affecting
the software too much. And you have the part, where it can get quite experimental and unreliable, without affecting all collaborators. Because the
process runs on your own computer and not on the server.
The markup concept comes from the documentation of a video streaming
workshop in Linz. There we wanted to have the possibility to write the
documentation collaboratively during the workshop and we needed also to
solve problems like How about the inclusion of images? That is where the first
markup element came from, which basically was just a specific line of
text, which indicates ‘here should be this/that image’. If this specific line
appears in the text during the transformation process, it triggers an action
that will look for a specific file in the repository. If the image exists, it will
write the matching macro command for LaTeX. If the image is not in the
repository, it will do nothing. The idea was that the creation of the .pdf
should happen anyway, even though somebody’s repository might not be at
the latest state and a missing image would prevent LaTeX from rendering
the document. It should also ignore errors, for example if someone mistypes
the name of an image or a command. It should not stop the process, but
produce a different output, e.g. wit
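The fault-tolerant image rule can be sketched as below: when a marker line names an image, the LaTeX macro is emitted only if the file exists locally; otherwise the line yields nothing and rendering continues. The marker syntax and macro are assumptions, not the workshop’s exact ones.

```python
# Tolerant image inclusion: a missing file or a mistyped marker never
# stops the build, it just changes the output.
import os

def image_line(line, repo="."):
    if not line.startswith("%%IMAGE "):
        return line                          # ordinary text passes through
    name = line.split(None, 1)[1].strip()
    if os.path.exists(os.path.join(repo, name)):
        return "\\includegraphics{%s}" % name
    return ""                                # missing image: skip silently

assert image_line("plain text") == "plain text"
assert image_line("%%IMAGE surely-not-in-repo.png") == ""
```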


ow from working with
code, that allows you to keep different versions, that keeps flowing in a way.
For example, this specific markup format. It’s basically markdown and
I wanted some more elements, like footnotes and the option to include
citations and comments. I find it quite handy, when you write software,
that you have the possibility to include comments that are not part of the
actual output, but part of the working process. I also enjoy this while
writing text (e.g. with LaTeX), because I can keep comments or previous
versions or drafts. So I really have my working version and transform this
to some kind of output.
But back to the etherpash workshop. Commands are basically comments
that will trigger some action, for example the inclusion of a graphic or
changing the font or anything. These commands are referenced in a separate
file, so everybody can have different versions of the commands on their own
machine. It would not affect the other people. For example, if you wanted
to have a much more elaborate GRAFIK command, you could write it and
use it within your transformer of the document, or you could introduce new
commands that are written on the main pad, but would be ignored by
other people, because they have a different reference file. Does this make
sense?
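The commands-as-comments idea with a per-user reference file can be sketched like this: the pad stays shared, each collaborator maps command names to their own implementations, and unknown commands are simply ignored. The comment prefix and the dict standing in for the reference file are illustrative assumptions.

```python
# Shared pad, personal command table: only commands present in your own
# reference mapping are executed; the rest are silently skipped.
def run_commands(pad_text, commands):
    out = []
    for line in pad_text.splitlines():
        if line.startswith("% "):             # a comment that may be a command
            name, *args = line[2:].split()
            if name in commands:              # unknown command: ignore
                out.append(commands[name](*args))
        else:
            out.append(line)
    return "\n".join(out)

# One collaborator's version of the GRAFIK command:
my_commands = {"GRAFIK": lambda f: "\\includegraphics{%s}" % f}
pad = "Some text\n% GRAFIK tag.png\n% FANCY xyz"
assert run_commands(pad, my_commands) == "Some text\n\\includegraphics{tag.png}"
```

Swapping in a more elaborate GRAFIK, or adding commands only you understand, changes your output without touching anyone else’s.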
Yes. In a way, there are a lot of grey zones. There are elements that are
global and elements that are local; elements can easily go parallel, and none
of the commands always has the same output for everyone.

They can, but they do not need to. You can stick to the very basic version
that comes directly from the repository. You could use this version to create
a .pdf in the ‘original’ way, but you can easily change it on different levels.
You can change the Bash commands that are triggered by the transformer
script, you can work on the LaTeX macros or change the script itself. I
found it quite important to have different levels of complexity. You may go
deeper, but you do not necessarily have to. The Etherpad content is the very
top level. You don’t have to install a software on your computer, you can
just open up a browser and edit the text. So this should make the access to
collaboration easier. Because for a lot of experimental software you spend a
lot of time just to get it running. Most often you have a very steep learning
curve, and I found it interesting to separate this learning curve in a way. So
you have different layers and if you really want to reconfigure on a deep level,
you can, but you do not necessarily have to.
I guess you are talking about collaboration across different levels of complexity, where different elements can transform the final outcome. But if you
take the analogy of CSS, or let’s say a Content Management System that
generates HTML, you could say that this also creates divisions of labour. So
rather than making collaboration possible, it confines people to different
files. How do you think your systems invite people to take part in different
levels? Are these layers porous at all? Can they easily slip between different
roles, let’s say an editor, a typographer and a programmer?
Up to a certain extent it’s like a division of labour. But if you call it a
separation of tasks, it makes defini


ash as glue
between different applications. So if we have a look now at the setup for
the conversations publication, we may see that Bash makes it really easy to
develop your own configurations and setups. I actually thought about preferring
the word ‘setup’ to ‘writing software’ ...

Are you saying you prefer setup ‘over’ configuration?

Setup or configuration of software ‘over’ actually writing software. Because
for me it’s often more about connecting different applications. For example,
here we have a browser-based text editor, from which the content is automatically pulled and transformed via text-transform tools and then rendered
as a .pdf. What I find interesting is that the scripts in between may actually not be very stable, but connect two stable parts. One is the Etherpad,
where the export function is taken ‘as is’ and you’ve got the final state of a
.pdf. In between, I try to have this flexible thing, that just needs to work
at this moment, in my special case. I mean certain scripts may reach quite
an amount of stability, but not necessarily. So it’s very good to have this
fixed state at the end.

You mean the .pdf?

I mean the .pdf, because ... These scri


and
so I don’t really think about other users beside me. For me it’s a whole
different subject to go to the usability level. That’s maybe also a cause for
the open state of the scripts. It would not make much sense – if I want to
have the opportunity for other people to make use of these things – to have
black boxes. Because for this, they are much too fragile. They can be taken
over, but there is no promise of ... convenience? 7 And it’s also important
for myself, because the setups are really tailored to a specific use case and
4 using sed, stream editor for filtering and transforming text
5 using inkscape on the commandline
6 using pdftk
7 ... distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without
even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. Free Software Foundation. GNU General Public License, 2007

therefore more or less temporary. So I need to be able to read and adapt
them myself.
I know that afterwards you usually provide a description of how the collage
was made. You publish the scripts, sketches and intermediary outcomes.
So it seems that usability is more in how you give access to the pr


a keyword reference
to the .pdf, while it’s still in the table of contents ...
What if someone would want to use one of these interviews for something else?
How could this book become a source for another publication?
That’s also an advantage of the quite simple source format on the Etherpad.
It can be easily converted to e.g. simple markdown, just by a little script.
I found this quite important – because at this point we’re putting quite an
amount of work into the preparation of the texts – not to have it in a format
that isn’t parseable. I really wanted to keep the documents transformable
in an easy way. So now you could just have a ~fiveliner, that will pull the text
from the Etherpad and convert it to simple markdown or to HTML.
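Such a “fiveliner” could look roughly like this, with the network step left out: Etherpad Lite exposes a pad’s plain text at a URL like `.../p/<pad>/export/txt`, and once fetched, a couple of substitutions turn the dialect into simple markdown. The “@” speaker marker is an assumed stand-in for the book’s actual markup element.

```python
# Convert pad text in the (assumed) custom dialect to simple markdown:
# speaker-change lines become bold labels, everything else passes through.
def pad_to_markdown(pad_text):
    out = []
    for line in pad_text.splitlines():
        if line.startswith("@"):
            out.append("**%s:**" % line[1:].strip())  # speaker change
        else:
            out.append(line)
    return "\n".join(out)

assert pad_to_markdown("@FS\nWhy Blender?") == "**FS:**\nWhy Blender?"
```

Because the source format stays this simple, the same few lines could just as easily target HTML, which is the portability point being made above.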
Wonderful.

If you have a more or less clean source format, then it’s in most cases easy
to convert it to different formats. For example, the Evan Roth interview,
you provided as a ConTeXt file. So with some text manipulation, it was
easy to do the transformation to our Etherpad markup. And it would be
harder if the content is stored as an Open Office document, but still feasible.
.pdf in a way is the worst case, because it’s much harder to extract usable
content again, depending on the creator. So I think it’s important to keep
the content in a readable and understandable source format.

Xavier, what is going to happen next?

Right now, I’m the guy who tests on Scribus, Inkscape. But I don’t know if it’s
the answer to your question.

I was just curious because you have a month to work on this still, so I was
wondering ... are there


re about that?

Because in the beginning, my first try was to keep the ‘life’ of a conversation in
the text with some things, like indentation or with graphic things, like the choice
of the unicode characters. If this can be a way to express a conversation. Because
it’s hard to do it with programming stuff, so we’re using GUI-based software.

It comes back a bit to the question of what you are doing differently if you work
with direct visual feedback. So you don’t try to reduce the content to get
it through a logical structure. Because that’s in a way how the markdown
to LaTeX transformation is doing it. You set certain rules, that may be in
special cases soft rules, but you really try to establish a logical structure and
have a set of rules and apply them. For me, it’s also an interesting question.
If you think of grid-based graphic design, where you try to introduce a set
of rules in the beginning and then keep them for the rest of the project, that’s
in a way a very obvious case for computation. Where you just apply a set of
rules. You are confronted a lot with this application of rules in daily graphic
design. And this is also a way of working you learn during your


ys coming back
to a certain grid, then you might as well do it by computation. So that’s
something that we wanted to find out. What advantage do you really gain
from having a canvas-based approach throughout the layout process?
In a way the interviews are very similar, because it’s always people speaking,
but at the same time each of the conversations is slightly different. So in what
way is the difference between them made legible: through the same set of rules
or by making specific rules for each of them?
If you do the layout by hand you can take decisions that would be much
harder to translate to code. For example, how to emphasize certain part
of the text or the speaker. You’re much closer to the interpretation of the
content? You’re not designing the ruleset but you are really working on the
visual design of the content ... The point why it’s interesting to me is because
working as a designer you get quite often reduced to this visual design of the
content, at the same it may make sense in a lot of cases. So it’s a evaluation
of these different approaches. Do you design the ruleset or do you design
the final outcome? And I think it has both advantages and disadvantages



Warnock, John, 159
Watson, Theo, 214
Westenberg, Peter, 187, 213
What You See Is What You Get, 25, 61–65, 283
Wilkinson, Jamie, 214
Williams, Claire, 99
Williams, George, 14, 23, 79, 299, 351
Wishlist, 246
Wium Lie, Håkon, 142, 146
Workflow, 52, 53, 60, 105, 109, 115, 119,
298, 300, 312
World Wide Web Consortium, 135–142,
145, 146, 304
XML, 52, 80, 144, 148, 158, 163, 175,
213, 214, 216, 233, 234,
301
Yildirim, Muharrem, 242, 245
Yuill, Simon, 232

Free Art License 1.3. (C) Copyleft Attitude, 2007. You can make reproductions and distribute this license verbatim (without any changes). Translation: Jonathan Clarke, Benjamin
Jean, Griselda Jung, Fanny Mourguet, Antoine Pitrou. Thanks to framalang.org
PREAMBLE

The Free Art License grants the right to freely
copy, distribute, and transform creative works
without infringing the author’s rights.
The Free Art License recognizes and protects
these rights. Their implementation has been
reformulated in order to allow everyone to use
creations of the human mind in a creative manner, regardless of their types and ways of expression.
While the public’s access to creations of the human mind usually is restricted by the implementation of copyright law, it is favoured by
the Free Art License. This license intends to
allow the use of a work’s resources; to establish
new conditions for creating in order to increase
creation opportunities. The Free Art License
grants the right to use a work, and acknowledges the right holders’ and the users’ rights and
responsibility.
The invention and development of digital technologies, Internet and Free Software have
changed creation methods: creations of the
human mind can obviously be distributed, exchanged, and transformed. They make it possible to produce common works to which everyone can
contribute to the benefit of all.
The main rationale for this Free Art License
is to promote and protect these creations of
the human mind according to the principles
of copyleft: freedom to use, copy, distribute,
transform, and prohibition of exclusive appropriation.
DEFINITIONS

“work” either means the initial work, the subsequent works or the common work as defined
hereafter:
“common work” means a work composed of the
initial work and all subsequent contributions to
it (originals and copies). The initial author is
the o

 
