Barok
Poetics of Research
2014


_An unedited version of a talk given at the conference [Public
Library](http://www.wkv-stuttgart.de/en/program/2014/events/public-library/)
held at Württembergischer Kunstverein Stuttgart, 1 November 2014._


Poetics of Research

In this talk I'm going to attempt to identify particular cultural
algorithms, i.e. processes in which cultural practices and software meet. With
them a sphere is implied in which algorithms gather to form bodies of
practices and in which cultures gather around algorithms. I'm going to
approach them through the perspective of my practice as a cultural worker,
editor and artist, considering practice to be of the same rank as theory and
poetics, and holding that theorization of practice can also lead to the
identification of poetical devices.

The primary motivation for this talk is an attempt to figure out where we
stand as operators, users and communities gathering around infrastructures
containing a massive body of text (among other things), and what sort of things
might be considered to make a difference, or to keep making one.

The talk mainly considers the role of text and the word in research, by way
of several figures.

A

A reference, list, scheme, table, index; those things that intervene in the
flow of narrative, illustrating the point, perhaps more economically than
linear text would do. Yet they don't function as pictures; they are
primarily texts, arranged in figures. Their forms have been standardised
over centuries and withstood the transition to the digital without any
significant change, being completely intuitive to the modern reader. Compared
to the body of text they are secondary, running parallel to it. Their function
is, however, different from that of punctuation. They are there neither to
shape the narrative nor to aid structuring the argument into logical blocks.
Nor is their function spatial, as in visual poems. Their positions within a
document are determined by the sequential order of the text, standing as
attachments, and they are there to clarify the nature of relations among
elements of the subject-matter, or to establish relations with other
documents. The premise of my talk is that these _textual figures_ also came
to serve as the abstract relational models determining possible relations
among documents as such, and in consequence to structure the conditions of
research.

B

It can be said that research, as inquiry into a subject-matter, consists of
discrete queries. A query, such as a question about what something is, what
kinds, parts and properties it has, and so on, can be consulted in
existing documents, or can generate new documents based on collecting data in
the field and through experiment, before proceeding to reasoning, arguments
and deductions. The formulation of a query is determined by the protocols
providing access to documents, which means that there is a difference between
collecting data outside the archive (the undocumented, i.e. in the field and
through experiment), consulting with a person--an archivist (expert,
librarian, documentalist)--and consulting with a database storing documents.
Phenomena such as the deepening of specialization and thorough digitization
have privileged the database as the fundamental means for research.
Obviously, this is a very recent phenomenon. Queries were once
formulated in natural language; now, given the fact that databases are queried
in the SQL language, their interfaces are mere extensions of it, and
researchers pose their questions by manipulating dropdowns, checkboxes and
input boxes mashed together on a flat screen run by software that in
turn translates them into a long line of conditioned _SELECTs_ and _JOINs_
performed on tables of data.
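
To illustrate, a search form of a bibliographic database might translate a
filled-in query into something like the following one-liner. This is a sketch
only; the database file and the _documents_/_authors_/_subjects_ schema are
hypothetical, not those of any particular system:

> sqlite3 library.db "SELECT documents.title, authors.name FROM documents JOIN authors ON authors.id = documents.author_id JOIN subjects ON subjects.id = documents.subject_id WHERE subjects.label = 'poetics' AND documents.year >= 1950 ORDER BY documents.year;"

Each dropdown or checkbox of the interface contributes one such condition or
join; the researcher never sees the statement itself.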

Specialization, digitization and networking have changed the language of
questioning. Inquiry, once attached to flesh and paper, has been
entrusted to the digital and networked. Researchers are querying the black
box.

C

Searching in a collection of amassed tangible documents (i.e. a
bookshelf) is different from searching in a systematically structured
repository (a library), and even more so from searching in a digital
repository (a digital library). Not that they are mutually exclusive: one can
devise structures and algorithms to search through a printed text, or read
books in a library one by one. They are rather models embodying various
processes associated with the query. These properties of the query might be
called the sequence, the structure and the index. If they are present in the
ways of querying documents, and we will return to this issue, are they
persistent within the inquiry as such?

D

This question itself is a rupture in the sequence. It makes a demand to depart
from one narrative, a continuous flow of words, to another, to figure out,
while remaining bound to it (it would be even more so with a so-called
rhetorical question). So there has been one sequence, or line, of the
inquiry--about the kinds of the query and its properties. That sequence itself
is a digression from the sequence about what research is and what its parts
(queries) are. We are thus returning to it and continue with the question
whether the properties of the inquiry are the same as the properties of the
query.

E

But isn't it true that every single utterance occurring in a sequence yields a
query as well? Let's consider the word _utterance_. It can produce a
number of associations, for example with how Foucault employs the notion of
_énoncé_ in his _Archaeology of Knowledge_, giving a hard time to his English
translators wondering whether _utterance_ or _statement_ is more appropriate,
whether they are interchangeable, and what impact each choice would have on
his reception in the Anglophone world. Limiting ourselves to textual forms for
now (and not translating his work but pursuing a different inquiry), let us say
the utterance is a word, phrase or idiom in a sequence such as a
sentence, a paragraph, or a document.

## (F) The structure

This distinction is as old as recorded Western thought, since both Plato and
Aristotle differentiate between a word on its own ("the said", a thing said)
and words in the company of other words. For example, Aristotle's _Categories_
rests on the notion of words on their own, and they are made the subject-
matter of that inquiry. For him, the ambiguity of connotation that words
produce lies in their synonymity, understood differently from the moderns:
not as more words denoting a similar thing but rather as one word denoting
various things. Categories were outlined as a device to differentiate among
words according to the kinds of these things. Every word as such belonged to
no less and no more than one of ten categories.

So it happens to the word _utterance_, as to any other word uttered in a
sequence, that it poses a question, a query about what share of the spectrum
of possibly denoted things might prove the most appropriate in a given
context. The more context, the more precise the share that comes to the fore.
When taken out of context, ambiguity prevails as the spectrum unveils in its
variety.

Thus single words, like any other utterances, are questions, queries,
themselves, and by occurring in statements, in context, their meanings are
being singled out.

This process is _conditioned_ by what has been formalized as the techniques of
_regulating_ definitions of words.

### (G) The structure: words as words

* Figure: P.Oxy. XX 2260 i: Oxyrhynchus papyrus XX, 2260, column i, with a quotation from Philitas, early 2nd c. CE. 1(http://163.1.169.40/cgi-bin/library?e=q-000-00---0POxy--00-0-0--0prompt-10---4------0-1l--1-en-50---20-about-2260--00031-001-0-0utfZz-8-00&a=d&c=POxy&cl=search&d=HASH13af60895d5e9b50907367) 2(http://en.wikipedia.org/wiki/File:POxy.XX.2260.i-Philitas-highlight.jpeg)

* Figure: Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and Sciences_, 1728, p. 210. 3(http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0576&id=HistSciTech.Cyclopaedia01&isize=L)

* Figure: Detail from the Liddell-Scott Greek-English Lexicon, c1843.

Dictionaries have had a long life. The ancient Greek scholar and poet Philitas
of Cos, living in the 4th c. BCE, wrote a vocabulary explaining the meanings
of rare Homeric and other literary words, words from local dialects, and
technical terms. The vocabulary, called _Disorderly Words_ (Átaktoi glôssai),
has been lost, with a few fragments quoted by later authors. One example is
that the word πέλλα (pélla) meant "wine cup" in the ancient Greek region of
Boeotia, in contrast to the same word meaning "milk pail" in Homer's _Iliad_.

Not much has changed in the way dictionaries constitute order. Selected
archives of statements are queried to yield occurrences of particular words,
various _criteria_ are applied to filtering and sorting them, and in turn the
spectrum of denoted things allocated in this way is structured into groups and
subgroups, which are then given, according to another set of rules, shorter or
longer names. These constitute facets of potential meanings of a word.

So there are at least _four_ sets of conditions structuring dictionaries.
One is required to delimit a corpus of texts, one to select and weight
occurrences of a word, another to cluster them, and yet another to generalize
the subject-matter of each of these clusters. Needless to say, this is a craft
of a few, and these criteria are rarely disclosed, despite their impact on
research and, more generally, their influence as conditions for the production
of so-called _common sense_.
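
For the second of these conditions, collecting occurrences, a minimal sketch
with standard Unix tools; the word and the corpus directory are placeholders
of my own. Every occurrence of the word is pulled out with a bit of
surrounding context, yielding the raw material a lexicographer would then
cluster into senses:

> grep -ohiE '.{0,30}pella.{0,30}' corpus/*.txt

Each output line is one occurrence in context; what remains, the weighting,
clustering and naming, is exactly where the undisclosed criteria enter.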

It doesn't take that much to reimagine what a dictionary is and what it could
be, especially with large specialized corpora of texts at hand. These can
also serve as aids in the production of new words and new meanings.

### (H) The structure: words as knowledge and the world

* Figure: Boethius's (6th c.) rendering of a classification tree described in Porphyry's Isagoge (3rd c.), in a 10th-c. manuscript. 4(http://www.e-codices.unifr.ch/en/sbe/0315/53/medium)

* Figure: Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and Sciences_, London, 1728, p. II. 5(http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0015&id=HistSciTech.Cyclopaedia01&isize=L)

* Figure: Système figuré des connaissances humaines, _Encyclopédie ou Dictionnaire raisonné des sciences, des arts et des métiers_, 1751. 6(http://encyclopedie.uchicago.edu/content/syst%C3%A8me-figur%C3%A9-des-connaissances-humaines)

* Figure: Ernst Haeckel, Stammbaum des Menschen (a Darwinian tree), 1874.

Another _formalized_ and internalized process at play when figuring out a word
is its containment. A word is structured not only by the things it potentially
denotes but also by the words it is potentially part of and those it contains.

The fuzz around the categorization of knowledge _and_ the world in Western
thought can be traced back to Porphyry, if not further. In his introduction to
Aristotle's _Categories_, this 3rd-century AD Neoplatonist began expanding the
notions of genus and species into their hypothetical consequences. Aristotle's
brief work outlines ten categories of 'things that are said' (legomena,
λεγόμενα), namely substance (οὐσία; not the same as matter), quantity (ποσόν),
qualification (ποιόν), relation (πρός τι), where (ποῦ), when (πότε),
being-in-a-position (κεῖσθαι), having (or state, condition, ἔχειν), doing
(ποιεῖν), and being-affected (πάσχειν). In another work, _Topics_, Aristotle
outlines four kinds of subjects indicated in the propositions or problems from
which arguments and deductions start. These are a definition (όρος), a genus
(γένος), a property (ἴδιος), and an accident (συμβεβηϰόϛ). Porphyry does not
explicitly refer to _Topics_, and says he omits speaking "about genera and
species, as to whether they subsist (in the nature of things) or in mere
conceptions only"
8(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C1),
which means he avoids explicating whether he talks about kinds of concepts or
kinds of things in the sensible world. However, the work sparked confusion, as
the following passage suggests:

> "[I]n each category there are certain things most generic, and again, others
most special, and between the most generic and the most special, others which
are alike called both genera and species, but the most generic is that above
which there cannot be another superior genus, and the most special that below
which there cannot be another inferior species. Between the most generic and
the most special, there are others which are alike both genera and species,
referred, nevertheless, to different things, but what is stated may become
clear in one category. Substance indeed, is itself genus, under this is body,
under body animated body, under which is animal, under animal rational animal,
under which is man, under man Socrates, Plato, and men particularly." (Owen
1853,
9(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C2))

Porphyry took one of Aristotle's ten categories of the word, substance, and
dissected it using one of his four rhetorical devices, genus. While employing
Aristotle's categories, genera and species as means for logical operations,
for dialectic, Porphyry's interpretation came to bear more resemblance to the
perceived _structures_ of the world. So they began to bloom.

There were earlier examples, but Porphyry was the most influential in
injecting the _universalist_ version of classification, implying the figure
of a tree, into the locus of Aristotle's thought. Knowledge became
monotheistic.

Classification schemes growing from a single point play a major role in
untangling the format of the modern encyclopedia from that of the dictionary
governed by the alphabet. Two of the most influential encyclopedias of the
18th century are cases in point. Although still keeping 'dictionary' in their
titles, they are conceived to represent not words but knowledge. The uppermost
genus of the body was set as the body of knowledge. The English
_Cyclopaedia, or an Universal Dictionary of Arts and Sciences_ (1728) splits
into two main branches, "natural and scientifical" and "artificial and
technical"; these further split down to 47 classes in total, each carrying a
structured list (on the following pages) of thematic articles, serving as a
table of contents. The French _Encyclopedia: or a Systematic Dictionary of the
Sciences, Arts, and Crafts_ (1751) unwinds from understanding (_entendement_)
and branches into memory as history, reason as philosophy, and imagination as
poetry. The logic of containers was employed as an aid not only for dealing
with the enormous task of naming, and not omitting anything from what is
known, but also for the management of the labour of hundreds of writers and
researchers, creating a mechanism for delegating work and distributing
responsibilities. Flesh was also more present, in field research, with
researchers attending workshops and sites of everyday life to annotate them.

The world came forward to outshine the word in other schemes. Darwin's tree of
evolution and some of the modern document classification systems, such as
Charles A. Cutter's _Expansive Classification_ (1882), set out to classify the
world itself and prepared the field for what has come to be known as authority
lists structuring metadata in today's computing.

### The structure (summary)

Facetization of meaning and branching of knowledge are both the domain of the
unit of utterance.

While lexicographers structure thought through multi-layered processes of
abstraction of the written record, knowledge growers dissect it into
hierarchies of mutually contained notions.

One seeks to describe the word as a faceted list of small worlds, the other to
describe the world as a structured list of words. One plays prime in the
domain of epistemology, in what is known, controlling the vocabulary; the
other in the domain of ontology, in what is, controlling reality.

Every word has its given things, every thing has its place, closer to or
further from a single word.

The schism between classifying words and classifying the world implies that it
is not possible to construct a universal classification scheme. On top of
that, any classification system of words is bound to the corpus of texts it
operates upon, and any classification system of the world again operates with
words which are bound to a lexicon which is again bound to a corpus of texts.
That doesn't prevent people from trying. Classifications function as
descriptors of, and 'inscriptors' upon, the world, imprinting their authority.
They operate from the locus of their corpus-specificity. The larger the
corpus, the more power it has in shaping the world, as far as the word shapes
it (yes, I do imply Google here, for which this is a domain to be potentially
exploited).

## (J) The sequence

The structure-yielding query of the single word narrows and sharpens with
preceding and following words. Inquiry proceeds in a flow that establishes
another mode of relationality, chaining words into the sequence. While the
structuring property of the query sets words apart from each other, its
sequential property establishes continuity and brings these units into an
ordered set.

This is what is responsible for attaching the textual figures mentioned
earlier (lists, schemes, tables) to the body of the text. Associations can
also be stated explicitly, by indexing tables and then referring to them from
a particular point in the text. The same goes for explicit associations made
between blocks of the text by means of indexed paragraphs, chapters or pages.

From this it follows that all utterances point to the following utterance by
the nature of sequential order, and indexing provides the means for pointing
elsewhere in the document as well.

A lot can be said about references to other texts. Here, to spare time, I
would refer you to a talk I gave a few months ago and which is online
10(http://monoskop.org/Talks/Communing_Texts).

This is still the realm of print. What happens with the document when it is
digitized?

Digitization breaks a document into units, each of which is assigned a
numbered position in the sequence of the document. From this perspective
digitization can be viewed as a total indexation of the document. It is
converted into units rendered for machine operations, and its sequentiality is
made explicit by means of an underlying index.

Sequences and chains are orders of one dimension. Their one-dimensional
ordering makes each element addressable and allows random access. Jumps
between arbitrary addresses are still sequential, processing elements one at a
time.
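
As a minimal illustration of such total indexation with Unix tools (the file
name is a placeholder of my own): every word-unit of a text receives its
numbered position in the sequence.

> tr -s '[:space:]' '\n' < text.txt | nl -ba

Each line of the output pairs an address with a unit, which is the underlying
index made explicit.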

## (K) The index

* Figure: Summa confessorum [1297-98], manuscript of 1310. 7(http://www.bl.uk/onlinegallery/onlineex/illmanus/roymanucoll/j/011roy000008g11u00002000.html)

Sequencing not only weaves words into statements but activates other
temporalities, and _presents occurrences of words from past statements_. As
now, when I am saying the word _utterance_, each time there surface contexts
in which I have used it earlier.

A long quote from Frederick G. Kilgour, _The Evolution of the Book_, 1998,
pp. 76-77:

> "A century of invention of various types of indexes and reference tools
preceded the advent of the first subject index to a specific book, which
occurred in the last years of the thirteenth century. The first subject
indexes were "distinctions," collections of "various figurative or symbolic
meanings of a noun found in the scriptures" that "are the earliest of all
alphabetical tools aside from dictionaries." (Richard and Mary Rouse supply an
example: "Horse = Preacher. Job 39: 'Hast thou given the horse strength, or
encircled his neck with whinning?')

>

> [Concordance] By the end of the third decade of the thirteenth century Hugh
de Saint-Cher had produced the first word concordance. It was a simple word
index of the Bible, with every location of each word listed by [its position
in the Bible specified by book, chapter, and letter indicating part of the
chapter]. Hugh organized several dozen men, assigning to each man an initial
letter to search; for example, the man assigned M was to go through the entire
Bible, list each word beginning with M and give its location. As it was soon
perceived that this original reference work would be even more useful if words
were cited in context, a second concordance was produced, with each word in
lengthy context, but it proved to be unwieldy. [Soon] a third version was
produced, with words in contexts of four to seven words, the model for
biblical concordances ever since.

>

> [Subject index] The subject index, also an innovation of the thirteenth
century, evolved over the same period as did the concordance. Most of the
early topical indexes were designed for writing sermons; some were organized,
while others were apparently sequential without any arrangement. By midcentury
the entries were in alphabetical order, except for a few in some classified
arrangement. Until the end of the century these alphabetical reference works
indexed a small group of books. Finally John of Freiburg added an alphabetical
subject index to his own book, _Summa Confessorum_ (1297—1298). As the Rouses
have put it, 'By the end of the [13]th century the practical utility of the
subject index is taken for granted by the literate West, no longer solely as
an aid for preachers, but also in the disciplines of theology, philosophy, and
both kinds of law.'"

In one sense neither the subject-index nor the concordance are indexes; they
are words or groups of words selected according to given criteria from the
body of the text, each accompanied with a list of identifiers. These
identifiers are elements of an index, whether they represent a page, chapter,
column, or another kind of block of text. Every identifier is a unique
_address_.

The index is thus an ordering of a sequence by means of associating its
elements with a set of symbols, where each element is given a unique
combination of symbols. Different sizes of sets yield different numbers of
variations. Symbol sets such as the alphabet, arabic numerals, roman numerals,
and binary digits have different proportions between the length of a string of
symbols and the number of possible variations it can encode. Thus two symbols
of the English alphabet can store 26^2 distinct values, of arabic numerals
10^2, of roman numerals 7^2, and of binary digits 2^2.
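
To state the proportion generally: a string of $n$ symbols over an alphabet of
size $s$ can distinguish $s^n$ addresses, so addressing $N$ elements requires
strings of length at least $n = \lceil \log_s N \rceil$. For instance, matching
the $26^2 = 676$ values of two alphabetic symbols takes
$\lceil \log_2 676 \rceil = 10$ binary digits.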

Indexation is segmentation, a breaking into segments. From as early as the
13th century, indexes such as those of sections have served as enablers of
search. The more detailed the indexation, the more precise the search results
it enables.

The subject-index and the concordance are tables of search results. And there
is a direct lineage from the 13th-century biblical concordances to the birth
of computational linguistic analysis: both were initiated and realised by
priests.

During World War II, the Jesuit Father Roberto Busa began to look for machines
for the automation of the linguistic analysis of the 11-million-word Latin
corpus of Thomas Aquinas and related authors.

Working on his Ph.D. thesis on the concept of _praesens_ in Aquinas, he
realised two things:

> "I realized first that a philological and lexicographical inquiry into the
verbal system of an author has t o precede and prepare for a doctrinal
interpretation of his works. Each writer expresses his conceptual system in
and through his verbal system, with the consequence that the reader who
masters this verbal system, using his own conceptual system, has to get an
insight into the writer's conceptual system. The reader should not simply
attach t o the words he reads the significance they have in his mind, but
should try t o find out what significance they had in the writer's mind.
Second, I realized that all functional or grammatical words (which in my mind
are not 'empty' at all but philosophically rich) manifest the deepest logic of
being which generates the basic structures of human discourse. It is .this
basic logic that allows the transfer from what the words mean today t o what
they meant to the writer.

>

> In the works of every philosopher there are two philosophies: the one which
he consciously intends to express and the one he actually uses to express it.
The structure of each sentence implies in itself some philosophical
assumptions and truths. In this light, one can legitimately criticize a
philosopher only when these two philosophies are in contradiction."
11(http://www.alice.id.tue.nl/references/busa-1980.pdf)

Busa began collaborating with IBM in New York in 1949, and the work, a
concordance of all the words of Thomas Aquinas, was finally published in the
1970s in 56 printed volumes (a version has been online since 2005
12(http://www.corpusthomisticum.org/it/index.age)). Besides that, an
electronic lexicon for automatic lemmatization of Latin words was created by a
team of ten priests within two years (in two phases: grouping all the
forms of an inflected word under their lemma, and coding the morphological
categories of each form and lemma), containing 150,000 forms
13(http://www.alice.id.tue.nl/references/busa-1980.pdf#page=4). Father
Busa has been dubbed the father of humanities computing and recently also of
digital humanities.

The subject-index has a crucial role in the printed book. It is the only means
of search the book offers. Subjects composing an index can be selected
according to a classification scheme (specific to the field of an inquiry),
for example as elements of a certain degree (with a given minimum number of
subclasses).

Its role seemingly vanishes in the digital text. But it can easily be
transformed. Besides serving as a table of pre-searched results, the subject-
index also gives a distinct idea of the content of a book. Two patterns give
us a clue: the numbers of occurrences of selected words give subjects their
weights, while words specific to the book outweigh others even if they don't
occur very often. A selection of these words then serves as a descriptor of
the whole text, and can be thought of as a specific kind of 'tags'.

This process was formalized in a mathematical function in the 1970s, thanks to
a formula by Karen Spärck Jones which she entitled 'inverse document
frequency' (IDF), or in other words, "term specificity". It is measured as the
ratio of the total number of texts in the corpus to the number of texts in
which the word appears at least once, usually on a logarithmic scale. When
multiplied by the frequency of the word _in_ the text (normalized by the
maximum frequency of any word in the text), we get _term frequency-inverse
document frequency_ (tf-idf). In this way we can get an automated list of
subjects which are particular to the text when compared to a group of texts.
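
Written out as formulas, in the double-normalized variant matching the
command-line example at the end of this text ($N$ is the number of texts in
the corpus, $n_t$ the number of texts containing term $t$, and $f(t,d)$ the
raw frequency of $t$ in text $d$):

> $\mathrm{tf}(t,d) = 0.5 + \frac{f(t,d)}{2\max_{t'} f(t',d)}$, $\quad \mathrm{idf}(t) = 1 + \ln\frac{N}{n_t}$, $\quad \mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \cdot \mathrm{idf}(t)$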

We have come to learn this through the practice of searching the web. It is a
mechanism not dissimilar to the thought process involved in retrieving
particular information online, and search engines have it built into their
indexing algorithms as well.

There is a paper proposing attaching words generated by tf-idf to hyperlinks
when referring to websites 14(http://bscit.berkeley.edu/cgi-bin/pl_dochome?query_src=&format=html&collection=Wilensky_papers&id=3&show_doc=yes).
This would enable finding the referred content even after the link is dead.
Hyperlinks in the references of the paper use this feature, and it can be
easily tested: 15(http://www.cs.berkeley.edu/~phelps/papers/dissertation-abstract.html?lexical-signature=notemarks+multivalent+semantically+franca+stylized).

There is another measure, cosine similarity, which takes tf-idf further and
can be applied to clustering texts according to similarities in their
specificity. This might be interesting as a feature for digital libraries, or
even as a way of organising a library bottom-up into novel categories; new
discourses could emerge. Or as an aid for researchers to sort through texts,
or even for editors producing interesting anthologies.
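
One standard formulation: with each text represented as a vector of tf-idf
weights $w_{t,d}$ over all terms $t$, the similarity of two texts $d_1$ and
$d_2$ is the cosine of the angle between their vectors,

> $\mathrm{sim}(d_1,d_2) = \frac{\sum_t w_{t,d_1} w_{t,d_2}}{\sqrt{\sum_t w_{t,d_1}^2}\sqrt{\sum_t w_{t,d_2}^2}}$

Texts with close specificity profiles score near 1 and can be clustered
together regardless of their length.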

## Final remarks

1

New disciplines emerge all the time - most recently, for example, cultural
techniques, software studies, or media archaeology. It takes years, even
decades, before they gain dedicated shelves in libraries or a category in
interlibrary digital repositories. Not that it matters that much. They are not
only sites of academic opportunity but, firstly, frameworks of new
perspectives for looking at the world, new domains of knowledge. From the
perspective of a researcher, partaking in a discipline involves negotiating
its vocabulary, classifications, corpus, reference field, and specific
terms. Creating new fields involves all that, and more. Even when
one goes against all disciplines.

2

Google can still surprise us.

3

Knowledge has been in the making for millennia. There have been (abstract)
mechanisms established that govern its conditions. We now possess specialized
corpora of texts which are interesting enough to serve as a ground to discuss
and experiment with dictionaries, classifications, indexes, and tools for
reference retrieval. These all belong to the poetic devices of knowledge-
making.

4

Command-line example of tf-idf and concordance in 3 steps.

* 1\. Process the files text.1-5.txt and produce freq.1-5.txt with lists of (nonlemmatized) words (in respective texts), ordered by frequency:

> for i in {1..5}; do tr '[A-Z]' '[a-z]' < text.$i.txt | tr -c '[a-z]' '[\012*]' | tr -d '[:punct:]' | sort | uniq -c | sort -k 1nr | sed '1,1d' > temp.txt; max=$(awk -vvar=1 -F" " 'NR==1 {print $var}' temp.txt); awk -vmaxx=$max -F' ' '{printf "%-7.7f %s\n", $1=0.5+($1/(maxx*2)), $2}' temp.txt > freq.$i.txt; done && rm temp.txt

* 2\. Process the files freq.1-5.txt and produce tfidf.1-5.txt containing a list of words (out of 500 most frequent in respective lists), ordered by weight (specificity for each text):

> for j in {1..5}; do rm -f freq.$j.txt.temp; for i in {1..500}; do word=$(awk -vline="$i" -vfield=2 -F" " 'NR==line {print $field}' freq.$j.txt); tf=$(awk -vline="$i" -vfield=1 -F" " 'NR==line {print $field}' freq.$j.txt); count=$(egrep -lw $word freq.?.txt | wc -l); idf=$(echo "1+l(5/$count)" | bc -l); tfidf=$(echo "$tf*$idf" | bc); echo $word $tfidf >> freq.$j.txt.temp; done; sort -k 2nr < freq.$j.txt.temp > tfidf.$j.txt; done

* 3\. Process the files tfidf.1-5.txt and their source texts, text.1-5.txt, and produce occ.txt with a concordance of the top 3 words from each of them:

> rm -f occ.txt && for j in {1..5}; do echo "$j" >> occ.txt; ptx -f -w 150 text.$j.txt > occ.$j.txt; for i in {1..3}; do word=$(awk -vline="$i" -vfield=1 -F" " 'NR==line {print $field}' tfidf.$j.txt); egrep -i "[[:alpha:]] $word" occ.$j.txt >> occ.txt; done; done

Dušan Barok

_Written 23 October - 1 November 2014 in Bratislava and Stuttgart._


1. [Preface to the English Edition](#fpref)
2. [Acknowledgments](#ack)
3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
1. [Notes](#f6-ntgp-9999)
4. [I: Evolution](#c1)
1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
2. [The Culturalization of the World](#c1-sec-0006)
3. [The Technologization of Culture](#c1-sec-0009)
4. [From the Margins to the Center of Society](#c1-sec-0013)
5. [Notes](#c1-ntgp-9999)
5. [II: Forms](#c2)
1. [Referentiality](#c2-sec-0002)
2. [Communality](#c2-sec-0009)
3. [Algorithmicity](#c2-sec-0018)
4. [Notes](#c2-ntgp-9999)
6. [III: Politics](#c3)
1. [Post-democracy](#c3-sec-0002)
2. [Commons](#c3-sec-0011)
3. [Against a Lack of Alternatives](#c3-sec-0017)
4. [Notes](#c3-ntgp-9999)

[Preface to the English Edition]{.chapterTitle} {#fpref}
  • ::: {.section}
    This book posits that we in the societies of the (transatlantic) West
    find ourselves in a new condition. I call it "the digital condition"
    because it gained its dominance as computer networks became established
    as the key infrastructure for virtually all aspects of life. However,
    the emergence of this condition pre-dates computer networks. In fact, it
    has deep historical roots, some of which go back to the late nineteenth
    century, but it really came into being after the late 1960s. As many of
    the cultural and political institutions shaped by the previous condition
    -- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
    forms of personal and collective orientation and organization emerged
    which have been shaped by the affordances of this new condition. Both
    the historical processes which unfolded over a very long time and the
    structural transformation which took place in a myriad of contexts have
    been beyond any deliberate influence. Although obviously caused by
    social actors, the magnitude of such changes was simply too great, too
    distributed, and too complex to be attributed to, or molded by, any
    particular (set of) actor(s).

    Yet -- and this is the core of what motivated me to write this book --
    this does not mean that we have somehow moved beyond the political,
    beyond the realm in which identifiable actors and their projects do
    indeed shape our collective []{#Page_vii type="pagebreak"
    title="vii"}existence, or that there are no alternatives to future
    development already expressed within contemporary dynamics. On the
    contrary, we can see very clearly that as the center -- the established
    institutions shaped by the affordances of the previous condition -- is
    crumbling, more economic and political projects are rushing in to fill
    that void with new institutions that advance their competing agendas.
    These new institutions are well adapted to the digital condition, with
    its chaotic production of vast amounts of information and innovative
    ways of dealing with that.

    From this, two competing trajectories have emerged which are
    simultaneously transforming the space of the political. First, I used
    the term "post-democracy" because it expands possibilities, and even
    requirements, of (personal) participation, while ever larger aspects of
    (collective) decision-making are moved to arenas that are structurally
    disconnected from those of participation. In effect, these arenas are
    forming an authoritarian reality in which a small elite is vastly
    empowered at the expense of everyone else. The purest incarnation of
    this tendency can be seen in the commercial social mass media, such as
    Facebook, Google, and the others, as they were newly formed in this
    condition and have not (yet) had to deal with the complications of
    transforming their own legacy.

    For the other trajectory, I applied the term "commons" because it
    expands both the possibilities of personal participation and agency, and
    those of collective decision-making. This tendency points to a
    redefinition of democracy beyond the hollowed-out forms of political
    representation characterizing the legacy institutions of liberal
    democracy. The purest incarnation of this tendency can be found in the
    institutions that produce the digital commons, such as Wikipedia and the
    various Free Software communities whose work has been and still is
    absolutely crucial for the infrastructural dimensions of the digital
    networks. They are the most advanced because, again, they have not had
    to deal with institutional legacies. But both tendencies are no longer
    confined to digital networks and are spreading across all aspects of
    social life, creating a reality that is, on the structural level,
    surprisingly coherent and, on the social and political level, full of
    contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
    title="viii"}

    I traced some aspects of these developments right up to early 2016, when
    the German version of this book went into production. Since then a lot
    has happened, but I resisted the temptation to update the book for the
    English translation because ideas are always an expression of their
    historical moment and, as such, updating either turns into a completely
    new version or a retrospective adjustment of the historical record.

    What has become increasingly obvious during 2016 and into 2017 is that
    central institutions of liberal democracy are crumbling more quickly and
    dramatically than was expected. The race to replace them has kicked into
    high gear. The main events driving forward an authoritarian renewal of
    politics took place on a national level, in particular the vote by the
    UK to leave the EU (Brexit) and the election of Donald Trump to the
    office of president of the United States of America. The main events
    driving the renewal of democracy took place on a metropolitan level,
    namely the emergence of a network of "rebel cities," led by Barcelona
    and Madrid. There, community-based social movements established their
    candidates in the highest offices. These cities are now putting in place
    practical examples that other cities could emulate and adapt. For the
    concerns of this book, the most important concept put forward is that of
    "technological sovereignty": to bring the technological infrastructure,
    and its developmental potential, back under the control of those who are
    using it and are affected by it; that is, the citizens of the
    metropolis.

    Over the last 18 months, the imbalances between the two trajectories
    have become even more extreme because authoritarian tendencies and
    surveillance capitalism have been strengthened more quickly than the
    commons-oriented practices could establish themselves. But it does not
    change the fact that there are fundamental alternatives embedded in the
    digital condition. Despite structural transformations that affect how we
    do things, there is no inevitability about what we want to do
    individually and, even more importantly, collectively.

    ::: {.poem}
    ::: {.lineGroup}
    Zurich/Vienna, July 2017[]{#Page_ix type="pagebreak" title="ix"}
    :::
    :::
    :::

    [Acknowledgments]{.chapterTitle} {#ack}
  • ::: {.section}
    While it may be conventional to cite one person as the author of a book,
    writing is a process with many collective elements. This book in
    particular draws upon many sources, most of which I am no longer able to
    acknowledge with any certainty. Far too often, important references came
    to me in parenthetical remarks, in fleeting encounters, during trips, at
    the fringes of conferences, or through discussions of things that,
    though entirely new to me, were so obvious to others as not to warrant
    any explication. Often, too, my thinking was influenced by long
    conversations, and it is impossible for me now to identify the precise
    moments of inspiration. As far as the themes of this book are concerned,
    four settings were especially important. The international discourse
    network "nettime," which has a mailing list of 4,500 members and which I
    have been moderating since the late 1990s, represents an inexhaustible
    source of internet criticism and, as a collaborative filter, has enabled
    me to follow a wide range of developments from a particular point of
    view. I am also indebted to the Zurich University of the Arts, where I
    have taught for more than 10 years and where the students have been
    willing to explain to me, again and again, what is already self-evident
    to them. Throughout my time there, I have been able to observe a
    dramatic shift. For today\'s students, the "new" is no longer new but
    simply obvious, whereas they []{#Page_x type="pagebreak" title="x"}have
    experienced many things previously regarded as normal -- such as
    checking out a book from a library (instead of downloading it) -- as
    needlessly complicated. In Vienna, the hub of my life, the World
    Information Institute has for many years provided a platform for
    conferences, publications, and interventions that have repeatedly raised
    the stakes of the discussion and have brought together the most
    interesting range of positions without regard to any disciplinary
    boundaries. Housed in Vienna, too, is the Technopolitics Project, a
    non-institutionalized circle of researchers and artists whose
    discussions of techno-economic paradigms have informed this book in
    fundamental ways and which has offered multiple opportunities for me to
    workshop inchoate ideas.

    Not everything, however, takes place in diffuse conversations and
    networks. I was also able to rely on the generous support of several
    individuals who, at one stage or another, read through, commented upon,
    and made crucial improvements to the manuscript: Leonhard Dobusch,
    Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
    Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
    Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
    thanks are owed to Rebina Erben-Hartig, who edited the original German
    manuscript and greatly improved its readability. I am likewise grateful
    to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
    Verlag, whose faith in the book never wavered despite several delays.
    Regarding the English version at hand, it has been a privilege to work
    with a translator as skillful as Valentine Pakis. Over the past few
    years, writing this book might have been the most import­ant project in
    my life had it not been for Andrea Mayr. In this regard, I have been
    especially fortunate.[]{#Page_xi type="pagebreak"
    title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
    :::

    Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal []{#Page_1 type="pagebreak"
    title="1"}self-discovery, for community, and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
    2010) on the cover of this book. Conchita\'s performance was also on a
    formal level seemingly paradoxical: extremely focused and completely
    open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
    fluidity of gender iden­tities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
    diagnosis: the erosion of old cultural forms, institutions, and
    certainties is not just something we affirm, but new ones have already
    formed whose contours are easy to identify not only in niche sectors but
    in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
    subdued Baden-Würtemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state\'s
    []{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third []{#Page_4 type="pagebreak" title="4"}of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
    basis, are integrated today and represent a socially domin­ant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
    exchange, and expression are prevalent across diverse var­ieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
    and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility of participating in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
    Digital Condition* might be expected to address -- most notably the
    matter of copyright. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

    Before moving on to the first chapter, I should briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute,
    subject to multiple ongoing changes, each happening
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
    is realized today in the production, use, and transformation of
    material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class of technological artifacts -- the
    computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites circle above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which are not limited to digital media but rather appear everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
    materials.[^10^](#f6-note-0010){#f6-note-0010a}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. As to
    the concrete realities in which these things will materialize, this is
    currently being decided in an open and ongoing process. The aim of this
    book is to contribute to our understanding of this process.
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

    [I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
    ====================================================
    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
    coordination and cooperation are not so new after all. Many of them
    have existed for a long time. At first, most of them were entirely
    separate from the technologies for which, later on, they would become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
    and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
    phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
    Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    (that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity was already engaged in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the dynamics
    that propelled the sectors he associated with one another were not
    only very different but had also originated as integral components in
    the development of the industrial production of goods. They were more of
    an extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This enabled even larger factories to be
    built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, new needs had to be
    created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
    overwhelming that both of them are struck dumb, and Jean even
    "blushes."

    It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
    basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media became their
    locus of negotiation. Between
    1880 and 1917, for instance, commercial advertising costs in the United
    States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
    Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
    productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
    instructions that imply or directly demand that precisely those sectors
    should be promoted that it considers economically promising, and that
    society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to new
    ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
    companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
    exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
    television and radio, and film and video. These activities were elevated to
    matters of political importance on account of their potential to create
    wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by the sociologist and
    consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who as the first openly gay top politician in
    Germany exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, and nearly all of these
    reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as to yield no search
    results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the earliest gay
    demonstrations in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While a broader and broader spectrum
    of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by defining the latter -- on the basis
    of bourgeois, individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to teach
    people about safe sexual practices for everyone and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and its treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is especially important. This informal structure
    should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerische Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition" -- continued to
    hold true.

    If the gay movement is representative of the social liberation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure are gradually
    entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many areas,
    it remains so today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency on to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never fully
    passively adopt the culture of the colonialists -- the "English book,"
    as he calls it. Their previous culture is never simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects, who from the dominant perspective
    are incapable of doing so, appropriate certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement is set under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of unity, which is threatened whenever
    "others" are empowered to speak and needs to be preserved, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, yet the central meaning of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference -- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    unnegotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his function as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against "foreigners" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom "form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qual­ities of an object were supposed to make it independent from
    changing times and from the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportunities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays in cities, as well as a few other
    central locations, are regularly constructed in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to which the strategies and methods of
    design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself over society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of "guerilla marketing" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to design certain elements of a
    shoe by altering it within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as mechanical mills and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as willing participants in
    and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled to take place during the subsequent year. In the same year,
    the Secretary of the Interior organized a conference to investigate
    faster methods of data processing. Two methods were tested for making
    manual labor more efficient, along with a third that relied on novel
    data-processing machines. The latter system emerged as the clear
    victor; developed by an engineer
    named Hermann Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coupling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information management. Data were no
    longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
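
    A minimal sketch may help to clarify the organizational principle that
    made the punch card so flexible: records stored as small,
    self-contained units can be selected, counted, and recombined in any
    order, whereas a fixed, linear table supports only the arrangement in
    which it was written down. The field names and sample data below are
    invented for illustration; this is a conceptual analogy in Python, not
    the historical tabulating logic.

    ``` {.python}
    # Fixed, linear arrangement: one table, one ordering, as in a
    # hand-written census ledger. (Sample data invented for illustration.)
    ledger = [
        ("Alice", 34, "Chicago"),
        ("Berta", 51, "Boston"),
        ("Carl", 28, "Chicago"),
    ]

    # Modular units: each record stands alone, like a single punch card,
    # and carries all of its own data.
    cards = [
        {"name": "Alice", "age": 34, "city": "Chicago"},
        {"name": "Berta", "age": 51, "city": "Boston"},
        {"name": "Carl", "age": 28, "city": "Chicago"},
    ]

    # "Tabulating" becomes a matter of selecting and re-sorting the cards
    # rather than rewriting the ledger.
    chicago_cards = [c for c in cards if c["city"] == "Chicago"]
    cards_by_age = sorted(cards, key=lambda c: c["age"])

    print(len(chicago_cards))                 # -> 2
    print([c["name"] for c in cards_by_age])  # -> ['Carl', 'Alice', 'Berta']
    ```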

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not least because many mass media
    companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept social relations to such a degree
    that the "repressive state apparatuses" (the police, the military, etc.)
    form a constant background in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics which are thus justified) even though,
    by doing so, they are forced to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are less a sphere in which an external reality
    is depicted; rather, they are themselves a constitutive element of
    reality.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas were circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Düsseldorf. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the hall, about 3 meters high. If the
    viewer moved down the corridor toward the two monitors, he or she
    would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with the situation of traditional film
    cameras, these new cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72^](#c1-note-0072){#c1-note-0072a}

    What these initiatives shared in common, in Western Europe and the
    United States, was their attempt to close the gap between the
    consumption and production of media, to activate the public, and at
    least in part to experiment with the media themselves. Non-professional
    producers were empowered with the ability to control who told their
    stories and how this happened. Groups that previously had no access to
    the medial public sphere now had opportunities to represent themselves
    and their own interests. By working together on their own productions,
    such groups demystified the medium of television and simultaneously
    equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s and in the space
    of Radio Dreyeckland in Freiburg, which had been founded in 1977 as
    Radio Verte Fessenheim, operations began at Germany\'s first pirate or
    citizens\' radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with their weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained constant
    in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed down to its own legalities, which
    could then be manipulated to affect things beyond the media.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77^](#c1-note-0077){#c1-note-0077a}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists had in common with the
    hackers and pioneers of computer networks was the idea that
    communication media are spaces for agency. During the 1960s, these
    programmers had likewise been searching for alternatives; the
    difference is that they pursued them not in counter-publics but rather
    in alternative lifestyles and forms of communication.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary in
    order to organize social life, as it radically reduced the world\'s
    complexity by forcing it through the bottleneck of official procedures.
    However, the price paid for such efficiency involved the atrophying of
    human relationships, which had to be subordinated to rigid processes
    that were incapable of registering unique characteristics and
    differences and were unable to react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This complex was possible because, as a theory, cybernetics was
    formulated in extraordinarily abstract terms, so much so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. Both on the
    micro and macro level (and this is decisive at this point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. And what was yearned for had become
    a liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. Primarily, he regarded the underlying vision
    as an innocent but amusing fantasy and not as a potential threat against
    which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. Along with the Internet
    Engineering Task Force (which he directed), Clark created the so-called
    Request-for-Comments documents, with which ideas could be presented to
    interested members of the community and simultaneous feedback could be
    collected in order to work through the ideas in question and thus reach
    a rough consensus. If such a consensus could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This invalidated the
    traditional assumption that social units, once they reach a certain
    size, necessarily introduce hierarchical structures in order to reduce
    complexity and the burden of communication. In other words, the
    foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that was enabled by the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that were only possible
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}
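
    In practice, this mechanism works by attaching the license to the code
    itself. The sketch below shows what such a notice looks like at the
    top of a hypothetical Python source file, following the header that
    the GNU project recommends for version 3 of the GPL -- a later
    revision of the license Stallman first published:

    ```python
    # example.py -- a hypothetical program released under the GNU GPL.
    #
    # Copyright (C) <year> <name of author>
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program. If not, see <https://www.gnu.org/licenses/>.

    def main() -> None:
        print("Free as in freedom.")

    if __name__ == "__main__":
        main()
    ```

    Because the notice travels with every copy and every modified version,
    the four freedoms are passed on along with the code -- the letter of
    copyright law turned against its own spirit.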

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    part, program code still had to be sent by mail. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- and
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing archives, filter
    functions, and search capabilities that made it possible to organize
    large amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
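
    The "specialized platforms" mentioned above were early version-control
    systems. As a minimal sketch of the idea -- illustrative only, not any
    real platform\'s implementation, with invented names throughout -- the
    archive below identifies each submitted version by a hash of its
    content and lets contributors filter and search the accumulated
    snapshots:

    ```python
    # A toy, content-addressed archive of program versions: an archive
    # (store), plus filter and search functions over many contributions.
    import hashlib

    class Archive:
        def __init__(self):
            self.snapshots = {}  # digest -> (author, message, source)

        def commit(self, author: str, message: str, source: str) -> str:
            # Identify each version by a short hash of its content.
            digest = hashlib.sha1(source.encode()).hexdigest()[:10]
            self.snapshots[digest] = (author, message, source)
            return digest

        def search(self, term: str) -> list:
            # Filter the archive: all versions whose log message
            # mentions the given term.
            return [d for d, (_, msg, _) in self.snapshots.items()
                    if term in msg]

    archive = Archive()
    v1 = archive.commit("linus", "initial kernel release", "source 0.01")
    v2 = archive.commit("alice", "fix keyboard driver", "source 0.02")
    print(archive.search("fix"))  # -> [digest of alice's version]
    ```

    Identifying versions by a hash of their content is, roughly, the
    principle on which later systems such as git were built.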
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    were already no longer very new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the number of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    replaced dial-up modems, and the internet was suddenly "here" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    knowledge workers alone, but rather something that is required by an
    increasingly broader stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," then cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (New York: John Wiley &
    Sons, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New 'Post-Modern'
    World* (New York: Harper,
    1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundesanzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    "lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally."

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1989).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des Harem:
    Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imra Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found at: \<\>.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003. See \<\>.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial of service attack" (DDOS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where it was disassociated from
    artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    "countercultural entrepreneurs."

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Taskforce* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}

    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Because culture is understood here as shared social
    meaning, such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without such algorithms, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
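
    A minimal sketch can make this reduction concrete. The following shows
    only the general pattern -- score a vast collection against a query
    and keep a human-readable shortlist -- and not Google\'s actual
    ranking method; the corpus and the deliberately crude scoring function
    are invented for illustration:

    ```python
    # "Big data" -> "small data": rank a large corpus against a query
    # and return only the top k results a human can actually register.
    from collections import Counter

    def score(query: str, document: str) -> int:
        # Crude relevance: how often do the query words occur?
        words = Counter(document.lower().split())
        return sum(words[w] for w in query.lower().split())

    def search(query: str, corpus: list, k: int = 3) -> list:
        ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
        return ranked[:k]

    corpus = ["the digital condition", "cybernetics and control",
              "remix culture and referentiality",
              "digital networks as places of action"]
    print(search("digital culture", corpus, k=2))
    ```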
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
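
    The practical consequence of this distinction can be made concrete in a
    few lines of code. The following sketch is purely illustrative and
    appears nowhere in the text: it contrasts copying a sequence of discrete
    signs, which is lossless by definition, with copying an analog signal,
    where every generation adds a little noise.

    ```python
    import random

    # A digitally coded message: a sequence of discrete signs.
    digital = list("WRITING IS ALREADY DIGITAL")
    # An "analog" rendering of the same message: continuous values.
    analog = [float(ord(c)) for c in "WRITING IS ALREADY DIGITAL"]

    def copy_digital(signs):
        # Discrete signs are simply re-identified; nothing is lost.
        return list(signs)

    def copy_analog(signal, noise=0.5):
        # Every analog copy introduces a little distortion; errors accumulate.
        return [v + random.gauss(0, noise) for v in signal]

    for _ in range(100):  # one hundred generations of copies
        digital = copy_digital(digital)
        analog = copy_analog(analog)

    print("".join(digital))  # identical to the original
    print("".join(chr(max(32, min(126, round(v)))) for v in analog))  # garbled
    ```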

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    sank significantly. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and overcome as a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this flood of rapidly accumulating texts, it was
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that followed soon after,
    society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digit­ally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the cultural ministers of the
    European Union launched the virtual library Europeana, after a number
    of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that they are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather by the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; the latter are not accessible to the public but
    rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1899, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photographs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 billion
    photographs. Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organizing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.
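
    What it means that Europeana is "just a standard database" can be
    sketched in a few lines. The schema and records below are a hypothetical
    miniature of my own and do not describe the actual data model or API of
    Europeana: heterogeneous artifacts are reduced to uniform, flat records,
    and each search request imposes its own ad hoc order on them.

    ```python
    import sqlite3

    # A deliberately minimal "cultural heritage" database: every object,
    # whatever its provenance, is reduced to the same flat record.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE artifacts (
        id INTEGER PRIMARY KEY, title TEXT, year INTEGER, institution TEXT)""")
    db.executemany(
        "INSERT INTO artifacts (title, year, institution) VALUES (?, ?, ?)",
        [("Mask, wood", 1890, "Ethnographic Museum"),
         ("Portrait of a Lady", 1632, "National Gallery"),
         ("Field recording, lament", 1911, "Phonogram Archive"),
         ("Mask, ceremonial", 1925, "Colonial Collection")])

    # Each query generates its own sequence of visible artifacts; no
    # curatorial meta-narrative survives except the order the query imposes.
    query = ("SELECT title, institution FROM artifacts "
             "WHERE title LIKE ? ORDER BY year")
    for row in db.execute(query, ("%Mask%",)):
        print(row)  # ('Mask, wood', 'Ethnographic Museum') ...
    ```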

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt has usually
    traveled a thousand kilometers before it ends up on a shelf in the
    supermarket. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new to filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying medial images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe the
    practice among certain attendees of dressing up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. As soon as the subculture has exceeded a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities that take place almost incidentally,
    however, these three practices usually do not make any significant or
    lasting differences. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billionfold sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms and in game worlds, but
    also through the cultural-economic production of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched so-called "Rijksstudios." Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.
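
    Amazon has not published the details of its procedure; a crude
    approximation of the underlying idea, offered here purely as an
    illustration, is to rank the phrases of a text by how much more often
    they occur there than in a reference corpus.

    ```python
    from collections import Counter

    def phrases(text, n=2):
        # Split a text into overlapping n-word phrases (bigrams by default).
        words = text.lower().split()
        return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

    def improbable_phrases(book, corpus, top=3):
        # Score each phrase by how over-represented it is in the book
        # relative to the reference corpus (with add-one smoothing).
        book_counts = Counter(phrases(book))
        corpus_counts = Counter(phrases(corpus))
        scores = {p: c / (corpus_counts[p] + 1) for p, c in book_counts.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top]

    book = "a text is a fabric of quotations drawn from innumerable texts"
    corpus = "texts quote other texts and new texts quote them in turn"
    print(improbable_phrases(book, corpus))
    ```

    A production system would rely on proper statistical tests rather than
    this naive ratio, but the principle is the same: what counts as
    improbable is always defined relative to everything else that has been
    written.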

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning is sensitive to the work that has
    to be done to maintain the context itself. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful restoration of the Berlin City Palace, and yet its
    repurposed function as a museum and meeting place, are typical of this
    new attitude in dealing with our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    It is impossible for an individual to orient him- or herself within a
    complex environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Yet nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
    fundamentally different and exclusive types of social relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with each other,
    no one can leave his or her place without calling into question the
    arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for their situations
    are embedded within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and knowledge management.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary greatly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" on
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken off, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are not suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is not especially challenging any more,
    especially because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality takes place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. At a gathering of the Occupy movement, for instance, a sure way
    to be kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empirical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geographical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize themselves, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a co­operative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the mire of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees Celsius.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. They decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second that began to organize itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which was hoisted upon it from the
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In growing areas of
    social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather one is allowed to -- and indeed ought to -- be authentically
    pursuing his or her own interests.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    areas of free software or Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, whereas approximately 27 percent of all jobs
    related to computer science were held by women, their representation at
    the same time was far lower in the field of free software -- on average
    less than 2 percent. And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If the consensus over whose contributions carry the
    most weight breaks down, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical processes that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be determined in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure instructions can be
    followed without interpretation. Second, it must be possible in practice
    to execute the individual steps together. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.

    Defined in such general terms, it would also be possible to understand
    the instruction manual for a typical piece of Ikea furniture as an
    algorithm. It is a set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (unambiguous
    pictograms aside) can lead to problems, be it that scratches and other
    traces on the finished piece of furniture testify to the unique nature
    of the (unsuccessful) execution, or that, inspired by the micro-trend of
    "Ikea hacking," the official instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorizmi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was made by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    a sequence of signs one step at a time, manipulate them according to
    established rules, and then write them out again. The validity of his model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
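
    The stored-program principle can be illustrated with a minimal sketch:
    in the toy Turing-machine simulator below, the rules are just data,
    like the tape itself, so the same machinery can run any algorithm
    expressed in this form. This is an illustration of the idea, not of
    von Neumann\'s actual design.

    ```python
    # A minimal Turing-machine simulator: the "program" (rules) is data,
    # just like the tape, echoing the stored-program principle.
    def run(rules, tape, state="start", pos=0):
        tape = dict(enumerate(tape))        # sparse tape
        while state != "halt":
            symbol = tape.get(pos, "_")     # "_" stands for a blank cell
            write, move, state = rules[(state, symbol)]
            tape[pos] = write
            pos += {"R": 1, "L": -1}[move]
        return "".join(tape[i] for i in sorted(tape))

    # Rules for inverting a binary string: (state, read) -> (write, move, next)
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(invert, "10110"))  # -> 01001_
    ```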

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semiconductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
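
    The arithmetic behind these factors can be verified quickly from the
    figures just quoted:

    ```python
    # Price-performance improvement, using the figures quoted above
    ops_1965, price_1965 = 40_000, 1_500_000      # IBM 360/40 (1965)
    ops_2005, price_2005 = 40_000_000_000, 1_500  # quad-core server (~2005)

    performance_gain = ops_2005 / ops_1965        # a factor of a million
    price_drop = price_1965 / price_2005          # a factor of a thousand
    print(performance_gain * price_drop)          # 1e9: a billion-fold

    # Storage: $400,000 per gigabyte (1980) vs. $0.10 (2010)
    print(400_000 / 0.10)                         # 4,000,000-fold reduction
    ```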

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction
    with new possibilities to formalize and mechanize ever larger areas of
    creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sports reports, it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}
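
    Returning to the Stats Monkey example above, the basic principle of
    such report generators can be gestured at with a toy sketch: extract
    salient facts from structured game data, match a crude narrative
    pattern, and fill a template. Every field name, threshold, and phrase
    below is invented and bears no relation to the actual system.

    ```python
    # Toy data-to-text sketch; all names and numbers are invented.
    game = {"home": "Bears", "away": "Hawks", "home_runs": 8,
            "away_runs": 5, "lead_changes": 4}

    def classify(g):
        # Recognize a crude narrative pattern in the numbers.
        return ("a dramatic back-and-forth battle" if g["lead_changes"] >= 4
                else "a one-sided affair")

    def report(g):
        winner, loser = ((g["home"], g["away"])
                         if g["home_runs"] > g["away_runs"]
                         else (g["away"], g["home"]))
        high = max(g["home_runs"], g["away_runs"])
        low = min(g["home_runs"], g["away_runs"])
        return f"The {winner} beat the {loser} {high}-{low} in {classify(g)}."

    print(report(game))  # The Bears beat the Hawks 8-5 in a dramatic ...
    ```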

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called E-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses historical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by their poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of these are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
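
    The loop just described (random initialization, evaluation, selection,
    recombination, mutation) might be sketched as follows. The fitness
    function, population size, and rates are arbitrary placeholders, not
    anything Google used.

    ```python
    import random

    def evolve(fitness, pop_size=100, genes=8, generations=200):
        # 1. Random first generation, limiting the designer's presuppositions
        pop = [[random.random() for _ in range(genes)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # 2. Evaluation: rank candidates by closeness to the desired output
            pop.sort(key=fitness, reverse=True)
            best = pop[: pop_size // 5]
            # 3. Recombination: combine pairs of the best candidates
            children = []
            while len(children) < pop_size:
                a, b = random.sample(best, 2)
                cut = random.randrange(genes)
                child = a[:cut] + b[cut:]
                # 4. Mutation: random changes keep the search moving
                if random.random() < 0.1:
                    child[random.randrange(genes)] = random.random()
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    # Toy goal: a vector of all ones stands in for "the desired solution."
    print(evolve(lambda v: -sum((x - 1) ** 2 for x in v)))
    ```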

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to learn how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from volumes of images that contain
    images both with faces and without them -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.
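
    As a crude stand-in for finding patterns without a model given in
    advance, consider clustering: the toy k-means sketch below groups
    points without being told what the groups mean. It is merely
    illustrative; the deep-learning systems described above work very
    differently and at an entirely different scale.

    ```python
    import random

    # Toy unsupervised grouping (k-means): structure emerges from the
    # data alone, without any prescribed model of the groups.
    def kmeans(points, k=2, rounds=20):
        centers = random.sample(points, k)
        for _ in range(rounds):
            groups = [[] for _ in range(k)]
            for p in points:   # assign each point to its nearest center
                i = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                        + (p[1] - centers[j][1]) ** 2)
                groups[i].append(p)
            centers = [        # move each center to the mean of its group
                (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                if g else centers[i] for i, g in enumerate(groups)]
        return centers

    pts = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
    print(kmeans(pts))  # two centers, one near each blob (order may vary)
    ```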

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
    however, such as that which occurred on May 6, 2010, when the Dow Jones
    Index dropped almost a thousand points in a few minutes (and was thus
    perceptible to humans), have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated ever more areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
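
    Reduced to its core, this recursive definition can be computed by
    simple iteration. Below is a toy sketch; the damping factor (0.85 by
    convention in the published formulation of PageRank) is a detail not
    mentioned above, and the link graph is invented.

    ```python
    # Toy PageRank; links maps each page to the pages it links to.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

    def pagerank(links, damping=0.85, rounds=50):
        pages = list(links)
        rank = {p: 1 / len(pages) for p in pages}
        for _ in range(rounds):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                for target in outgoing:   # each link passes on a share
                    new[target] += damping * rank[page] / len(outgoing)
            rank = new
        return rank

    print(pagerank(links))  # "C" ranks highest: it has the most inbound links
    ```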

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical model of a static
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns documents a rank within a
    dynamic and singular universe of information that is tailored to each
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}
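
    In data terms, such a profile is nothing more than a continuously
    growing record keyed to these three levels; a minimal sketch, with
    every field invented for illustration:

    ```python
    # Schematic three-level profile; all fields are invented.
    profile = {
        "knowledgeable_person": {
            "searches": ["flu symptoms", "trains to Basel"],
            "browsing": ["news", "forums"],
        },
        "physical_person": {
            "locations": [(47.56, 7.59)],   # e.g. smartphone GPS fixes
            "steps_today": 6214,            # e.g. body sensors
        },
        "social_person": {
            "interactions": {"alice": 34, "bob": 2},  # contact frequency
        },
    }
    ```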

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. On the one hand, profiles
    rather consist of sub-individual elements -- of fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and they consist,
    on the other hand, of clusters of multiple people, so that the person
    being modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and currency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
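
    Public descriptions of EdgeRank reduce it to a product of the three
    variables just named; a schematic rendering, with invented weights and
    decay rather than Facebook\'s actual parameters:

    ```python
    # Schematic EdgeRank-style score: affinity x content weight x decay.
    def edge_score(affinity, content_weight, post_age_hours, half_life=24):
        decay = 0.5 ** (post_age_hours / half_life)  # older posts fade out
        return affinity * content_weight * decay

    # A close friend's fresh photo outranks an acquaintance's stale update:
    print(edge_score(affinity=0.9, content_weight=1.5, post_age_hours=1))
    print(edge_score(affinity=0.2, content_weight=1.0, post_age_hours=24))
    ```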
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of their lists is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read or can only poorly read, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to make certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the antitrust proceedings were recommenced by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the presence of books released by recalcitrant publishers
    that had dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social networking. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach to behavioral psychology that postulated that human behavior
    could be explained, predicted, and controlled purely on the basis of
    outwardly observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or
    herself, for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer our questions before we ask
    them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

    [8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
    the alphabet, every manuscript is unique because it depends not only on
    the sequence of letters but also on the individual ability of a given
    scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
    particular shape. With the rise of the printing press, the alphabet shed
    these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elisabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

    [16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.
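
    To make the point in the note concrete, here is a minimal,
    illustrative Python sketch (the URL and the `play` function are
    placeholders, not any real service or API): a "stream" is simply a
    download whose chunks are handed to a player as they arrive, instead
    of being saved to disk first.

    ```python
    # Sketch only: streaming as a download consumed chunk by chunk.
    import urllib.request

    def play(chunk: bytes) -> None:
        # Stand-in for an audio/video decoder consuming raw bytes.
        print(f"playing {len(chunk)} bytes")

    def stream(url: str, chunk_size: int = 64 * 1024) -> None:
        with urllib.request.urlopen(url) as response:
            while True:
                chunk = response.read(chunk_size)  # same transfer as a download
                if not chunk:
                    break
                play(chunk)  # consumed immediately rather than saved

    stream("https://example.org/video.mp4")  # placeholder URL
    ```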

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
    defines "selfie" as a "photographic self-portrait; *esp*. one taken with
    a smartphone or webcam and shared via social media."

    [32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
    *Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
    Kulturproduktion* (Vienna: Turia + Kant, 2011).

    [33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
    Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
    (London: Fontana Press, 1977), pp. 142--8.

    [34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
    Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
    wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
    Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
    (Darmstadt: Von Zabern, 2013).

    [35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
    geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
    realization but has long been a special area of research for
    musicologists. What is new, however, is that it is no longer
    controversial outside of this narrow disciplinary discourse. See Peter
    J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
    Field," *Notes* 50 (1994), pp. 851--70.

    [36](#c2-note-0036a){#c2-note-0036}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 56.

    [37](#c2-note-0037a){#c2-note-0037}  Quoted from Eran Schaerf\'s audio
    installation *FM-Scenario: Reality Race* (2013), online.

    [38](#c2-note-0038a){#c2-note-0038}  The number of members, for
    instance, of the two large political parties in Germany, the Social
    Democratic Party and the Christian Democratic Union, reached its peak at
    the end of the 1970s or the beginning of the 1980s. Both were able to
    increase their absolute numbers for a brief time at the beginning of the
    1990s, when the Christian Democratic Party even reached its absolute
    high point, but this can be explained by a surge in new members after
    reunification. By 2010, both parties already had fewer members than
    Greenpeace, whose 580,000 members make it Germany's largest NGO.
    Parallel to this, between 1970 and 2010, the proportion of people
    without any religious affiliation grew to approximately 37 percent.
    That there are more churches and political parties today is indicative
    of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
    for any single organization to attract broad strata of society.

    [39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
    a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

    [40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
    Society*, trans. Charles P. Loomis (East Lansing: Michigan State
    University Press, 1957).

    [41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
    "The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
    *The Cambridge Companion to the Communist Manifesto*, ed. Carver and
    James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
    at 239. For Marx and Engels, this was -- like everything pertaining to
    the dynamics of capitalism -- a thoroughly ambivalent development. For,
    in this case, it finally forced people "to take a down-to-earth view of
    their circumstances, their multifarious relationships" (ibid.).

    [42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
    demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
    1944) that the idea of strictly separated spheres, which are supposed to
    be so typical of society, is in fact highly ideological. He argued above
    all that the attempt to implement this separation fully and consistently
    in the form of the free market would destroy the foundations of society
    because both the life of workers and the environment of the market
    itself would be regarded as externalities. For a recent adaptation of
    this argument, see David Graeber, *Debt: The First 5000 Years* (New
    York: Melville House, 2011).

    [43](#c2-note-0043a){#c2-note-0043}  Tönnies's persistent influence can
    be felt, for instance, in Zygmunt Bauman's negative assessment of the
    compunction to strive for community in his *Community: Seeking Safety in
    an Insecure World* (Malden, MA: Blackwell, 2001).

    [44](#c2-note-0044a){#c2-note-0044}  See, for example, Amitai Etzioni,
    *The Third Way to a Good Society* (London: Demos, 2000).

    [45](#c2-note-0045a){#c2-note-0045}  Jean Lave and Étienne Wenger,
    *Situated Learning: Legitimate Peripheral Participation* (Cambridge:
    Cambridge University Press, 1991), p. 98.

    [46](#c2-note-0046a){#c2-note-0046}  Étienne Wenger, *Cultivating
    Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
    Harvard Business School Press, 2000).

    [47](#c2-note-0047a){#c2-note-0047}  The institutions of the
    disciplinary society -- schools, factories, prisons and hospitals, for
    instance -- were closed. Whoever was inside could not get out.
    Participation was obligatory, and instructions had to be followed. See
    Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
    trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
    type="pagebreak" title="189"}

    [48](#c2-note-0048a){#c2-note-0048}  Weber famously defined power as
    follows: "Power is the probability that one actor within a social
    relationship will be in a position to carry out his own will despite
    resistance, regardless of the basis on which this probability rests."
    Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
    trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
    California Press, 1978), p. 53.

    [49](#c2-note-0049a){#c2-note-0049}  For those in complete despair, the
    following tip is provided: "To get more likes, start liking the photos
    of random people." Such a strategy, it seems, is more likely to increase
    than decrease one's hopelessness. The quotations are from "How to Get
    More Likes on Your Instagram Photos," *WikiHow* (2016), online.

    [50](#c2-note-0050a){#c2-note-0050}  Jeremy Gilbert, *Democracy and
    Collectivity in an Age of Individualism* (London: Pluto Books, 2013).

    [51](#c2-note-0051a){#c2-note-0051}  Diedrich Diederichsen,
    *Eigenblutdoping: Selbstverwertung, Künstlerromantik, Partizipation*
    (Cologne: Kiepenheuer & Witsch, 2008).

    [52](#c2-note-0052a){#c2-note-0052}  Harrison Rainie and Barry Wellman,
    *Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
    2012). The term is practical because it is easy to understand, but it is
    also conceptually contradictory. An individual (an indivisible entity)
    cannot be defined in terms of a distributed network. With a nod toward
    Gilles Deleuze, the cumbersome but theoretically more precise term
    "dividual" (the divisible) has also been used. See Gerald Raunig,
    "Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
    Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
    Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

    [53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
    of Social Signatures in Human Communication," *Proceedings of the
    National Academy of Sciences of the United States of America* 111
    (2014): 942--7.

    [54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
    study of where people find out information about new jobs. As the study
    shows, this information does not usually come from close friends, whose
    level of knowledge often does not differ much from that of the person
    looking for a job, but rather from loose acquaintances, whose living
    environments do not overlap much with one\'s own and who can therefore
    make information available from outside of one\'s own network. See Mark
    Granovetter, "The Strength of Weak Ties," *American Journal of
    Sociology* 78 (1973): 1360--80.

    [55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
    p. 420.

    [56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
    ständige Erreichbarkeit ver­langen?" *Zeit Online* (June 13, 2012),
    online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

    [57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
    A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
    Columbia University Press, 2013).

    [58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
    -- has already become so standard that it is now regarded as a way to
    help women achieve a better balance between work and family life. See
    Kolja Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
    2014), online.

    [59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
    (2009), directed by Michael Madsen.

    [60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
    Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
    Press, 1996).

    [61](#c2-note-0061a){#c2-note-0061}  Werner Busch and Peter Schmoock,
    *Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
    1987), p. 179 \[--trans.\].

    [62](#c2-note-0062a){#c2-note-0062}  "'When Attitudes Become Form' at
    the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
    online.

    [63](#c2-note-0063a){#c2-note-0063}  Owing to the hyper-capitalization
    of the art market, which has been going on since the 1990s, this role
    has shifted somewhat from curators to collectors, who, though validating
    their choices more on financial than on argumentative grounds, are
    essentially engaged in the same activity. Today, leading curators
    usually work closely together with collectors and thus deal with more
    money than the first generation of curators ever could have imagined.

    [64](#c2-note-0064a){#c2-note-0064}  Diedrich Diederichsen, "Showfreaks
    und Monster," *Texte zur Kunst* 71 (2008): 69--77.

    [65](#c2-note-0065a){#c2-note-0065}  Alexander R. Galloway, *Protocol:
    How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
    2004), pp. 7, 75.

    [66](#c2-note-0066a){#c2-note-0066}  Even the *Frankfurter Allgemeine
    Zeitung* -- at least in its online edition -- has begun to publish more
    and more articles in English. The newspaper has accepted the
    disadvantage of higher editorial costs in order to remain relevant in
    the increasingly globalized debate.

    [67](#c2-note-0067a){#c2-note-0067}  Joseph Reagle, "'Free as in
    Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
    online.

    [68](#c2-note-0068a){#c2-note-0068}  Wikipedia\'s own "Editor Survey"
    from 2011 reports that 9 percent of its editors are women. Other
    studies have arrived at slightly higher figures. See Benjamin Mako
    Hill and Aaron Shaw, "The
    Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with
    Propensity Score Estimation," *PLOS ONE* 8 (July 26, 2013), online. The
    problem is well known, and the Wikimedia Foundation has been making
    efforts to correct matters. In 2011, its goal was to increase the
    participation of women to 25 percent by 2015. This has not been
    achieved.[]{#Page_191 type="pagebreak" title="191"}

    [69](#c2-note-0069a){#c2-note-0069}  Shyong (Tony) K. Lam et al.,
    "WP: Clubhouse? An Exploration of Wikipedia's Gender Imbalance,"
    *WikiSym* 11 (2011), online.

    [70](#c2-note-0070a){#c2-note-0070}  David Singh Grewal, *Network Power:
    The Social Dynamics of Globalization* (New Haven, CT: Yale University
    Press, 2008).

    [71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

    [72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
    (Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

    [73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
    Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

    [74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
    and the Bazaar," *First Monday* 3 (1998), online.

    [75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
    Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
    Weidenfeld, 1962), pp. 79--88.

    [76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
    Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
    (Berlin: Suhrkamp, 2013).

    [77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
    of science and technology studies. See, for instance, Geoffrey C. Bowker
    and Susan Leigh Star, *Sorting Things Out: Classification and Its
    Consequences* (Cambridge, MA: MIT Press, 1999).

    [78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
    Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
    (Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), pp. 50--69.

    [79](#c2-note-0079a){#c2-note-0079}  Quoted from Doron Swade, "The
    'Unerring Certainty of Mechanical Agency': Machines and Table Making in
    the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
    History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
    Oxford University Press, 2003), pp. 145--76, at 150.

    [80](#c2-note-0080a){#c2-note-0080}  The mechanical construction
    suggested by Leibniz was not to be realized as a practically usable (and
    therefore patentable) calculating machine until 1820, by which point it
    was referred to as an "arithmometer."

    [81](#c2-note-0081a){#c2-note-0081}  Krämer, *Symbolische Maschinen*, p. 98
    \[--trans.\].

    [82](#c2-note-0082a){#c2-note-0082}  Charles Babbage, *On the Economy of
    Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
    have already mentioned what may, perhaps, appear paradoxical to some of
    our readers -- that the division of labour can be applied with equal
    success to mental operations, and that it ensures, by its adoption, the
    same economy of time."

    [83](#c2-note-0083a){#c2-note-0083}  This structure, which is known as
    "Von Neumann architecture," continues to form the basis of almost all
    computers.

    [84](#c2-note-0084a){#c2-note-0084}  "Gordon Moore Says Aloha to
    Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
    type="pagebreak" title="192"}

    [85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
    an eine Zukunft ohne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
    could also say that this anxiety has been caused by the fact that the
    automation of labor has begun to affect middle-class jobs as well.

    [86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
    Write a Better News Story than a Human Reporter?" *Wired* (April 24,
    2012), online.

    [87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
    Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
    (New York: New Vessel Press, 2016).

    [88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
    are not unique in this regard. *Spiegel* has reported that, in Russia,
    entire "bot armies" have been mobilized for the "propaganda battle."
    Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
    *Spiegel Online* (February 28, 2015), online.

    [89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
    Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
    Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
    online.

    [90](#c2-note-0090a){#c2-note-0090}  Thomas Bunnell, "The Mathematics of
    Film," *Boom Magazine* (November 2007): 48--51.

    [91](#c2-note-0091a){#c2-note-0091}  Christopher Steiner, "Automatons
    Get Creative," *Wall Street Journal* (August 17, 2012), online.

    [92](#c2-note-0092a){#c2-note-0092}  "The Hewlett Foundation: Automated
    Essay Scoring," [kaggle.com](http://kaggle.com) (February 10, 2012),
    online.

    [93](#c2-note-0093a){#c2-note-0093}  Ian Ayres, *Super Crunchers: How
    Anything Can Be Predicted* (London: Bookpoint, 2007).

    [94](#c2-note-0094a){#c2-note-0094}  Each of these models was tested on
    the basis of the 50 million most common search terms from the years
    2003--8 and classified according to the time and place of the search.
    The results were compared with data from the health authorities. See
    Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
    Engine Query Data," *Nature* 457 (2009): 1012--4.

    [95](#c2-note-0095a){#c2-note-0095}  In absolute terms, the rate of
    correct hits, at 15.8 percent, was still relatively low. With the same
    dataset, however, random guessing would only have an accuracy of 0.005
    percent. See Quoc V. Le et al., "Building High-Level Features Using
    Large-Scale Unsupervised Learning,"
    [research.google.com](http://research.google.com) (2012), online.
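
    The gap becomes tangible with a little arithmetic (a back-of-the-envelope
    calculation from the note\'s own figures, not from the paper): random
    guessing among $N$ equally likely categories succeeds with probability
    $1/N$, so an accuracy of 0.005 percent corresponds to roughly 20,000
    categories, and 15.8 percent beats chance by a factor of more than
    3,000:

    $$\frac{1}{N} = 0.00005 \;\Rightarrow\; N = 20{,}000, \qquad
    \frac{0.158}{0.00005} = 3{,}160.$$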

    [96](#c2-note-0096a){#c2-note-0096}  Neil Johnson et al., "Abrupt Rise
    of New Machine Ecology beyond Human Response Time," *Nature: Scientific
    Reports* 3 (2013), online. The authors counted 18,520 of these events
    between January 2006 and February 2011; that is, about 15 per day on
    average.

    [97](#c2-note-0097a){#c2-note-0097}  Gerald Nestler, "Mayhem in Mahwah:
    The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
    in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
    *Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
    2014), pp. 125--46.

    [98](#c2-note-0098a){#c2-note-0098}  Another facial recognition
    algorithm by Google provides a good impression of the rate of progress.
    As early as 2011, this algorithm could identify dogs in images with 80
    percent accuracy. Three years later, this rate had not only increased to
    93.5 percent (which corresponds to human capabilities), but the
    algorithm could also identify more than 200 different types of dog,
    something that hardly any person can do. See Robert McMillan, "This Guy
    Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
    15, 2015), online.

    [99](#c2-note-0099a){#c2-note-0099}  Sergey Brin and Lawrence Page, "The
    Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
    Networks and ISDN Systems* 30 (1998): 107--17.

    [100](#c2-note-0100a){#c2-note-0100}  Eugene Garfield, "Citation Indexes
    for Science: A New Dimension in Documentation through Association of
    Ideas," *Science* 122 (1955): 108--11.

    [101](#c2-note-0101a){#c2-note-0101}  Since 1964, the data necessary for
    this has been published as the Science Citation Index (SCI).

    [102](#c2-note-0102a){#c2-note-0102}  The assumption that the subjects
    produce these structures indirectly and without any strategic intention
    has proven to be problematic in both contexts. In the world of science,
    there are so-called citation cartels -- groups of scientists who
    frequently refer to one another\'s work in order to improve their
    respective position in the SCI. Search engines have likewise given rise
    to search engine optimizers, which attempt by various means to optimize
    a website\'s evaluation by search engines.

    [103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
    and its influence on the early version of Google\'s PageRank, see Katja
    Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
    der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
    Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
    2009), pp. 64--83.

    [104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
    not be registered by the algorithm at all, for the search engine indexed
    the web by having its "crawler" follow the links itself.
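
    Notes 99 and 104 together describe the basic mechanics. What follows
    is a minimal power-iteration sketch of PageRank in its original 1998
    formulation (the link graph, damping factor, and iteration count are
    illustrative assumptions, not Google\'s production values):

    ```python
    # Minimal PageRank power iteration, following the formulation in the
    # 1998 paper cited in note 99. The link graph is invented: page "D"
    # has no inbound links, so a link-following crawler would never find
    # it (note 104); here it receives only the baseline teleport term.
    damping = 0.85
    links = {              # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["A"],
    }

    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):    # iterate until the ranks stabilize
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new

    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
    ```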

    [105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
    [moz.com](http://moz.com) (2016), online.

    [106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
    Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
    of Personalisation," *First Monday* 17 (2011), online.

    [107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
    Factors," *Search Engine Journal* (May 31, 2013), online.

    [108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
    advertising that motivates the collection of personal information. Such
    information is also needed for the development of personalized
    algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
    the flood of data. It can therefore be assumed that the rampant
    collection of personal information will not cease or slow down even if
    commercial demands happen to change, for instance to a business model
    that is not based on advertising.

    [109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
    these three levels are recorded, see Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

    [110](#c2-note-0110a){#c2-note-0110}  This raises the question of which
    drivers should be sent on a detour, so that no traffic jam comes about,
    and which should be shown the most direct route, which would now be
    traffic-free.

    [111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
    Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
    online.

    [112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
    Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

    [113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
    unprocessed, and "cooked," in the sense of processed, derive from the
    anthropologist Claude Lévi-Strauss, who introduced them to clarify the
    difference between nature and culture. See Claude Lévi-Strauss, *The Raw
    and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
    IL: University of Chicago Press, 1983).

    [114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
    Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
    2013), online.

    [115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
    cited quite often is already obsolete: Michael K. Bergman, "White Paper
    -- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
    Publishing* 7 (2001), online. The more content is dynamically generated
    by databases, the more questionable such estimates become. It is
    uncontested, however, that only a small portion of online information is
    registered by search engines.

    [116](#c2-note-0116a){#c2-note-0116}  Theo Röhle, "Die Demontage der
    Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    133--48.

    [117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
    world to be recorded by algorithms is not restricted to digital
    networks. As early as 1994 in Germany, for instance, a new sort of
    typeface was introduced (the *Fälschungserschwerende Schrift*,
    "forgery-impeding typeface") on license plates for the sake of machine
    readability and facilitating automatic traffic control. To the human
    eye, however, it appears somewhat misshapen and
    disproportionate.[]{#Page_195 type="pagebreak" title="195"}

    [118](#c2-note-0118a){#c2-note-0118}  [Fairsearch.org](http://Fairsearch.org)
    was officially supported by several of Google\'s competitors, including
    Microsoft, TripAdvisor, and Oracle.

    [119](#c2-note-0119a){#c2-note-0119}  "Antitrust: Commission Sends
    Statement of Objections to Google on Comparison Shopping Service,"
    *European Commission: Press Release Database* (April 15, 2015), online.

    [120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
    Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
    the middle of 2014, according to some sources, Google had received
    around 20 million requests to remove links from its index on account of
    copyright violations.

    [121](#c2-note-0121a){#c2-note-0121}  Alexander Wragge, "Google-Ranking:
    Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
    online.

    [122](#c2-note-0122a){#c2-note-0122}  Farhad Manjoo, "Amazon\'s Tactics
    Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
    (May 23, 2014), online.

    [123](#c2-note-0123a){#c2-note-0123}  Lucas D. Introna and Helen
    Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
    Matters," *Information Society* 16 (2000): 169--85, at 181.

    [124](#c2-note-0124a){#c2-note-0124}  Eli Pariser, *The Filter Bubble:
    How the New Personalized Web Is Changing What We Read and How We Think*
    (New York: Penguin, 2012).

    [125](#c2-note-0125a){#c2-note-0125}  Antoinette Rouvroy, "The End(s) of
    Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
    Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
    Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
    York: Routledge, 2013), pp. 143--65.

    [126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
    Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
    to predict and control the behavior of the individual organism. This is
    our 'dependent variable' -- the effect for which we are to find the
    cause. Our 'independent variables' -- the causes of behavior -- are the
    external conditions of which behavior is a function."

    [127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
    Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
    9, 2014), online.

    [128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
    "Critical Questions for Big Data: Provocations for a Cultural,
    Technological and Scholarly Phenomenon," *Information, Communication &
    Society* 15 (2012): 662--79.
    :::
    :::

    [III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}
    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal []{#Page_1 type="pagebreak"
    title="1"}self-discovery, for community, and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
    2010) on the cover of this book. Conchita\'s performance was also on a
    formal level seemingly paradoxical: extremely focused and completely
    open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
    fluidity of gender identities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
    diagnosis: we are no longer merely registering the erosion of old
    cultural forms, institutions, and certainties; new ones have already
    formed, and their contours are easy to identify not only in niche
    sectors but in the mainstream. Shortly before Conchita\'s triumph,
    Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
    subdued Baden-Württemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state\'s
    []{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third []{#Page_4 type="pagebreak" title="4"}of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
    basis, are integrated today and represent a socially dominant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
    exchange, and expression are prevalent across diverse varieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
    and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility to participate in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.[]{#Page_6 type="pagebreak" title="6"}

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
    Digital Condition* might be expected to address, notably the matter of
    copyright, for one example. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

    Before moving on to the first chapter, I should first briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    []{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute that
    is subject to multiple ongoing changes, each happening
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
    is realized today in the production, use, and transformation of
    material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class []{#Page_8 type="pagebreak"
    title="8"}of technological artifacts -- the computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites are circling around above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which is not limited to digital media but rather appears everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
    materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9 type="pagebreak" title="9"}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. As to
    the concrete realities in which these things will materialize, this is
    currently being decided in an open and ongoing process. The aim of this
    book is to contribute to our understanding of this process.[]{#Page_10
    type="pagebreak" title="10"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
    type="pagebreak" title="176"}

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

    [I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
    ====================================================
    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
    coordination and cooperation are also not so new after all. Many of them
    have existed for a long time. At first most of them were totally
    separate from the technologies for which, later on, they would become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    []{#Page_11 type="pagebreak" title="11"}background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    []{#Page_12 type="pagebreak" title="12"}has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
    and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
    phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
    Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    []{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity was already engaged in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the sectors
    that he associated with one another were not only propelled by very
    different dynamics but had also originated as integral components in the
    development of the industrial production of goods. They were more of an
    extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This []{#Page_14 type="pagebreak"
    title="14"}enabled even larger factories to be built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, []{#Page_15 type="pagebreak"
    title="15"}new needs had to be created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
    overwhelming that both of them are struck dumb, and Jean even
    "blushes."

    It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
    basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media []{#Page_16
    type="pagebreak" title="16"}became their locus of negotiation. Between
    1880 and 1917, for instance, commercial advertising costs in the United
    States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
    Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
    productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
    instructions that imply or directly demand that precisely those sectors
    should be promoted that it considers economically promising, and that
    society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to []{#Page_18
    type="pagebreak" title="18"}new ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    []{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
    companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
    exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
    television and radio, and film and video. These activities were elevated to
    matters of political importance on account of their potential to create
    wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by []{#Page_20
    type="pagebreak" title="20"}the sociologist and consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who as the first openly gay top politician in
    Germany exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, []{#Page_21 type="pagebreak"
    title="21"}and nearly all of these reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as []{#Page_22
    type="pagebreak" title="22"}to yield no search results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    []{#Page_23 type="pagebreak" title="23"}follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the earliest gay
    demonstration in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While []{#Page_25 type="pagebreak"
    title="25"}a broader and broader spectrum of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by defining the latter -- on the basis
    of bourgeois, individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to teach
    people about safe sexual practices for everyone and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and its treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is []{#Page_26 type="pagebreak"
    title="26"}especially important. This informal structure should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition" -- continued to
    hold true.

    If the gay movement is representative of the social liberation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure []{#Page_28
    type="pagebreak" title="28"}are gradually entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural []{#Page_29 type="pagebreak" title="29"}production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many areas,
    it remains so today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency on to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never
    adopt the culture of the colonialists in a fully passive manner -- the "English book,"
    as he calls it. Their previous culture is never simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    []{#Page_31 type="pagebreak" title="31"}strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects -- who, from the dominant
    perspective, are incapable of doing precisely this -- appropriate
    certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement gets under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered into,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of unity, which needs to be preserved and is
    threatened whenever "others" are empowered to speak, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, but the central role of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    non-negotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his capacity as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against "foreigners" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qualities of an object were supposed to make it independent of
    changing times and of the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, []{#Page_36 type="pagebreak" title="36"}which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportunities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays are regularly constructed in city
    centers, as well as in a few other central locations, in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to []{#Page_38 type="pagebreak"
    title="38"}which the strategies and methods of design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself over society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of "guerrilla marketing" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to design certain elements of a
    shoe by altering it within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as milling machines and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.
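
    How such customization "within a set of guidelines" works can be
    pictured with a minimal sketch in Python. Everything here -- the
    product, the options, the rules -- is invented for illustration and
    does not refer to any actual configurator.

    ```python
    # A hypothetical shoe configurator: the user chooses freely, but only
    # combinations permitted by the manufacturer's rules are accepted.

    ALLOWED_COLORS = {"red", "blue", "black"}
    ALLOWED_SOLES = {"flat", "cushioned"}
    MAX_LABEL_LENGTH = 8  # only a short personalized label is allowed

    def validate(color: str, sole: str, label: str) -> bool:
        """Accept a design only if every element stays within the guidelines."""
        return (
            color in ALLOWED_COLORS
            and sole in ALLOWED_SOLES
            and len(label) <= MAX_LABEL_LENGTH
        )

    print(validate("red", "flat", "JO"))    # True: within the guidelines
    print(validate("green", "flat", "JO"))  # False: this color is not offered
    ```

    The design choice is the point: the user appears as an active
    participant, yet the space of possible outcomes remains defined in
    advance by the producer.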

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as []{#Page_40
    type="pagebreak" title="40"}willing participants in and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled for the following year. In the same year, the Secretary of
    the Interior organized a conference to investigate faster methods of
    data processing. Two methods were tested: one sought to make manual
    labor more efficient, while the other promised to achieve greater
    efficiency by means of novel data-processing machines. The
    latter system emerged as the clear victor; developed by an engineer
    named Herman Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coupling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information []{#Page_41
    type="pagebreak" title="41"}management. Data were no longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly that they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
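
    The conceptual shift from the fixed table to freely recombinable
    record units can be pictured with a minimal sketch in Python -- not
    historical code, of course, and with fields and values invented for
    illustration.

    ```python
    # Each "card" is a self-contained record, like a railroad car that can
    # be coupled and decoupled at will.
    from collections import Counter

    cards = [
        {"state": "NY", "age": 34, "occupation": "clerk"},
        {"state": "IL", "age": 29, "occupation": "machinist"},
        {"state": "NY", "age": 51, "occupation": "teacher"},
    ]

    # One pass tabulates the cards by state ...
    by_state = Counter(card["state"] for card in cards)

    # ... and the very same cards can immediately be regrouped by another
    # criterion -- something a table drawn up for a single purpose cannot
    # do without being copied out anew.
    by_occupation = Counter(card["occupation"] for card in cards)

    print(by_state)       # Counter({'NY': 2, 'IL': 1})
    print(by_occupation)  # Counter({'clerk': 1, 'machinist': 1, 'teacher': 1})
    ```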

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not []{#Page_42 type="pagebreak"
    title="42"}least because many mass media companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept existing social relations to such
    a degree that the "repressive state apparatuses" (the police, the
    military, etc.) can remain a constant background presence in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics which are thus justified) even though,
    by doing so, they are forced to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are not so much a sphere in which an external
    reality is depicted as a constitutive element of reality itself.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    []{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas were circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Düsseldorf. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the hall, about 3 meters high. If the
    viewer moved down the corridor toward the two []{#Page_44
    type="pagebreak" title="44"}monitors, he or she would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with the situation of traditional film
    cameras, these new cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72^](#c1-note-0072){#c1-note-0072a}[]{#Page_45
    type="pagebreak" title="45"}

    What these initiatives shared in common, in Western Europe and the
    United States, was their attempt to close the gap between the
    consumption and production of media, to activate the public, and at
    least in part to experiment with the media themselves. Non-professional
    producers were empowered with the ability to control who told their
    stories and how this happened. Groups that previously had no access to
    the mediated public sphere now had opportunities to represent themselves
    and their own interests. By working together on their own productions,
    such groups demystified the medium of television and simultaneously
    equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s, in the milieu
    of Radio Dreyeckland in Freiburg, which had been founded in 1977 as
    Radio Verte Fessenheim, operations began at Germany\'s first pirate or
    citizens\' radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    []{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with their weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained steadfast
    in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed in terms of its inherent logic, which
    could then be manipulated to affect things beyond the media.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77^](#c1-note-0077){#c1-note-0077a}[]{#Page_47
    type="pagebreak" title="47"}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists shared in common with the
    hackers and pioneers of computer networks was the idea that
    communication media are spaces for agency. During the 1960s, these
    programmers were likewise in search of alternatives; the difference is
    that they pursued these alternatives not in counter-publics, but rather
    in alternative lifestyles and forms of communication.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary in
    order to organize social life, for it radically reduced the world\'s
    complexity by forcing it through the bottleneck of official procedures.
    However, the price paid for such efficiency involved the atrophying of
    human relationships, which had to be subordinated to rigid processes
    that were incapable of registering unique characteristics and
    differences and were unable to react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This complex was possible because, as a theory, cybernetics was
    formulated in extraordinarily abstract terms, so much so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}
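
    The cybernetic equation of communication and control can be pictured
    with a minimal sketch in Python: a controller that acts on its
    environment solely through informational feedback. The thermostat
    scenario and all numbers are illustrative assumptions, not taken from
    the sources discussed here.

    ```python
    # A feedback loop in the cybernetic sense: measure the deviation from
    # a goal, act, observe the new state, repeat. Control and
    # communication coincide in the circulation of this information.

    def thermostat(target: float, temperature: float, steps: int = 8) -> None:
        for _ in range(steps):
            error = target - temperature  # feedback: deviation from the goal
            heating_on = error > 0        # control decision from feedback alone
            # The environment responds: the room warms while heated,
            # and cools otherwise.
            temperature += 0.5 if heating_on else -0.3
            print(f"temperature={temperature:.1f} "
                  f"heating={'on' if heating_on else 'off'}")

    thermostat(target=21.0, temperature=19.0)
    ```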

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. Both on the
    micro and the macro level (and this is the decisive point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. What was yearned for was now
    a liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. Above all, he regarded the underlying vision
    as an innocent but amusing fantasy and not as a potential threat against
    which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. Within the Internet
    Engineering Task Force, so-called Request-for-Comments documents served
    to present ideas to interested members of the community and to collect
    feedback simultaneously, in order to work through the ideas in question
    and thus reach
    a rough consensus. If such a consensus could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This invalidated the
    traditional assumption that social units, once they exceed a certain
    size, necessarily have to introduce hierarchical structures in order
    to reduce the complexity of communication. In other words, the foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that was enabled by the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that could only be possible
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}
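
    How such a license is attached to an actual program can be
    illustrated with a brief sketch. Under the convention recommended by
    the GNU project, every source file of a covered program carries a
    notice placing it under the GPL, thereby extending the four freedoms
    to each recipient. The file and function below are invented for
    illustration, and the notice is abridged:

    ```python
    # frob.py -- a hypothetical utility, shown only to illustrate how the
    # GNU GPL is applied to a source file in practice.
    # Copyright (C) <year> <name of author>
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful, but
    # WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
    # General Public License for more details.

    def frob(items):
        """Anyone may run, study, change, and redistribute this function."""
        return sorted(items)
    ```

    Because the notice travels with every copy, the freedoms cannot be
    stripped away downstream: whoever redistributes the program, modified
    or not, must pass them on under the same terms.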

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    some cases, source code still had to be sent by post. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- and
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing archives, filter
    functions, and search capabilities that made it possible to organize
    large amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
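
    The archival principle behind such platforms can be reduced to a few
    lines. The following sketch is hypothetical and models no actual
    system: every submitted version is stored under a hash of its
    content, so versions appearing in parallel never overwrite one
    another and every earlier state remains retrievable. Content
    addressing of this kind later became the core of Git, the
    version-control system that Torvalds wrote in 2005 to manage kernel
    development.

    ```python
    import hashlib

    store = {}    # content hash -> file content
    history = []  # ordered log of (author, message, content hash)

    def commit(author, message, content):
        """Archive one version of a file under the hash of its content."""
        digest = hashlib.sha1(content.encode()).hexdigest()
        store[digest] = content
        history.append((author, message, digest))
        return digest

    v1 = commit("linus", "first public release", "kernel version 0.01")
    v2 = commit("someone", "patch the scheduler", "kernel version 0.01 + fix")

    # Parallel versions coexist; older states are never lost.
    assert store[v1] == "kernel version 0.01"
    ```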
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    by then were no longer especially new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the number of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    supplanted dial-up modems, and the internet was suddenly "here" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    knowledge workers alone, but rather something that is required by an
    increasingly broader stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," then cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (New York: John Wiley &
    Sons, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New "Post-Modern" World* (New York: Harper,
    1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundes­anzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    "lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally."

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1990).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des Harems:
    Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imra Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found at: \<\>.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003. See \<\>.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial-of-service attack" (DDoS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where it was disassociated from
    artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    "countercultural entrepreneurs."

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Task Force* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}
    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Understood as shared social meaning, the arena of culture
    entails that such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without them, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
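
    The nature of this reduction can be made concrete with a deliberately
    naive sketch. It is not Google\'s algorithm, merely an illustration
    of the principle: every document in a large collection is given a
    simple relevance score for a query, and only the few highest-scoring
    ones are returned as "small data."

    ```python
    def top_k(documents, query_terms, k=3):
        """Reduce a large collection to the k most relevant documents."""
        def score(doc):
            words = doc.lower().split()
            # naive relevance: total occurrences of the query terms
            return sum(words.count(term) for term in query_terms)
        return sorted(documents, key=score, reverse=True)[:k]

    corpus = [
        "a history of the printing press",
        "digital networks and everyday culture",
        "culture in the digital condition",
        "weather report for stuttgart",
    ]
    print(top_k(corpus, ["digital", "culture"], k=2))
    # -> ['digital networks and everyday culture', 'culture in the digital condition']
    ```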
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitalize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
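
    The decisive property of discrete coding can likewise be shown in a
    few lines. The following toy comparison (a sketch, not a model of any
    historical medium) copies a digitally coded text and a continuously
    valued "analog" signal a thousand times: the discrete signs survive
    every generation unchanged, while the analog copy accumulates error
    at each step.

    ```python
    import random

    def copy_digital(signs):
        # discrete signs are simply repeated exactly -- no loss
        return list(signs)

    def copy_analog(values, noise=0.01):
        # each analog copy adds a little noise -- generation loss
        return [v + random.gauss(0, noise) for v in values]

    text = list("manuscript")
    signal = [0.1, 0.5, 0.9]

    for _ in range(1000):
        text = copy_digital(text)
        signal = copy_analog(signal)

    print("".join(text))  # still exactly "manuscript"
    print(signal)         # has drifted away from the original values
    ```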

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    sank significantly. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and overcome as a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this information flood of rapidly amassing texts,
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.
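
    The conventions described here amount, in effect, to a record
    structure, and a sketch can make this explicit. The fields below are
    illustrative, not drawn from any historical catalogue: once author,
    publisher, year, and page numbers are standardized, any single
    passage can be addressed unambiguously.

    ```python
    # A minimal bibliographic record and a passage-level reference.
    book = {
        "author": "Elizabeth Eisenstein",
        "title": "The Printing Press as an Agent of Change",
        "publisher": "Cambridge University Press",
        "year": 1979,
    }

    def cite(record, page):
        """Format a reference that points to one passage of one text."""
        return (f'{record["author"]}, *{record["title"]}* '
                f'({record["publisher"]}, {record["year"]}), p. {page}.')

    print(cite(book, 71))
    ```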

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that was taking place not far
    behind, society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digit­ally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the cultural ministers of the
    European Union launched the virtual Europeana library, which occurred
    after a number of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that they are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather by the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; the latter are not accessible to the public but
    rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1888, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photographs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 billion
    photographs. Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organizing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.
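
    To make the point concrete: a minimal sketch, in Python, of how such a
    flat metadata database behaves. All records, fields, and the scoring
    rule here are invented for illustration (Europeana\'s actual data model
    and ranking are of course far more complex); the sketch shows only that
    the collection itself has no fixed order, and that each query generates
    its own sequence of visible artifacts.

    ```python
    # Hypothetical toy catalog: a flat list of records stripped of any
    # institutional context, as in the scenario described above.
    from dataclasses import dataclass

    @dataclass
    class Artifact:
        title: str
        institution: str
        year: int

    CATALOG = [
        Artifact("Mask, carved wood", "Ethnographic Museum", 1890),
        Artifact("Portrait of a Lady", "National Gallery", 1654),
        Artifact("Miners' banner", "Labour History Archive", 1984),
    ]

    def search(query: str) -> list[Artifact]:
        """Rank records by naive term overlap with the query.

        The "order" of the collection exists only as the output of this
        function: a different query string yields a different sequence.
        """
        terms = query.lower().split()

        def score(a: Artifact) -> int:
            text = f"{a.title} {a.institution} {a.year}".lower()
            return sum(term in text for term in terms)

        return sorted((a for a in CATALOG if score(a) > 0),
                      key=score, reverse=True)

    # Two queries, two different orders over the same flat corpus.
    print([a.title for a in search("museum mask")])
    print([a.title for a in search("banner 1984")])
    ```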

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt usually has
    a thousand kilometers behind it before it ends up on a shelf in the
    supermarket. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new about filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying medial images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe the
    practice of certain attendees to dress up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. Once the subculture exceeds a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities that take place almost incidentally,
    however, these three practices usually do not make any significant or
    lasting differences. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billionfold sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms, in game worlds, but also
    through cultural-economic productions of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched so-called "Rijksstudios." Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.
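
    Amazon has not disclosed how it computes these phrases, but the
    principle lends itself to a short sketch. Here is a hedged illustration
    in Python, with invented toy corpora, add-one smoothing, and an
    arbitrary threshold: a phrase counts as "statistically improbable" if
    it occurs far more often in one text than its frequency in a background
    corpus would predict.

    ```python
    # Toy version of "statistically improbable phrases": flag bigrams that
    # are frequent in one text but rare in a background corpus.
    from collections import Counter

    def bigrams(text: str) -> Counter:
        words = text.lower().split()
        return Counter(zip(words, words[1:]))

    # Invented corpora, standing in for "all other books" and "this book."
    background = bigrams(
        "the author quotes the text and the reader reads the text again"
    )
    book = bigrams(
        "the fabric of quotations weaves the text a fabric of quotations"
    )

    total_bg = sum(background.values())
    total_book = sum(book.values())

    for phrase, count in book.items():
        p_book = count / total_book
        # Add-one smoothing, so phrases absent from the background corpus
        # still receive a finite score.
        p_bg = (background[phrase] + 1) / (total_bg + len(book))
        if p_book / p_bg > 3:  # arbitrary threshold for the toy example
            print(" ".join(phrase), round(p_book / p_bg, 1))
    ```

    In this toy run, bigrams such as "fabric of" and "of quotations" score
    highly because the background corpus never uses them, which is exactly
    the intuition behind the claim that the referential texture of a text
    can be made statistically visible.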

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning is sensitive to the work that has
    to be done to maintain the context itself. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful reconstruction of the Berlin City Palace, combined
    with its repurposed function as a museum and meeting place, is typical
    of this new attitude toward dealing with our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    As an individual, it is impossible to orient oneself within a complex
    environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Yet nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
    fundamentally different and exclusive types of social relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with each other,
    no one can leave his or her place without calling into question the
    arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for they are
    embedded within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary highly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" on
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken down, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are no suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is no longer especially challenging,
    not least because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality takes place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. At a gathering of the Occupy movement, a sure way to be
    kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empirical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geographical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize oneself, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a cooperative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the pull of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. They decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second that began to organize itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which had been foisted upon it from
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend to each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In growing areas of
    social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
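
    To return to the TCP/IP example: the following minimal sketch in
    Python (host and port chosen arbitrarily for illustration) shows two
    endpoints that can interact only because both accept the same
    protocol. No authority instructs either side; the exchange works
    solely on the basis of the shared standard:

    ```python
    import socket

    # Both endpoints "voluntarily" speak TCP; neither gives the other orders.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP socket
    server.bind(("127.0.0.1", 9999))   # arbitrary local address and port
    server.listen(1)

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 9999))
    conn, _ = server.accept()

    client.sendall(b"hello")  # the bytes arrive only because both sides
    print(conn.recv(5))       # follow the same agreed-upon rules of exchange

    client.close(); conn.close(); server.close()
    ```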

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather one is allowed to -- and indeed ought to -- be authentically
    pursuing his or her own interests.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    fields of free software and Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, whereas approximately 27 percent of all jobs
    related to computer science were held by women, their representation at
    the same time was far lower in the field of free software -- on average
    less than 2 percent. And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If the consensus breaks down over whose contributions have
    been carrying the most weight, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical processes that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be determined in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure that the instructions
    can be followed without interpretation. Second, it must be possible in
    practice to execute each of the individual steps. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.
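
    To make these three requirements concrete, here is a minimal sketch
    in Python (Euclid's algorithm for the greatest common divisor, chosen
    purely as an illustration; it is not an example discussed in the
    text):

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: greatest common divisor of two positive integers."""
        # 1. Unambiguous: each step is fully described in a formal language.
        # 2. Executable: each step can actually be carried out in practice.
        # 3. Mechanical: under fixed conditions, the same input always
        #    produces the same output.
        while b != 0:
            a, b = b, a % b  # replace (a, b) with (b, a mod b)
        return a

    assert gcd(48, 36) == 12  # terminates after a finite number of steps
    ```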

    Defined in such general terms, even the instruction manual for a
    typical piece of Ikea furniture can be understood as an algorithm: a
    set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (unambiguous
    pictograms aside) can lead to problems, be it that scratches and other
    traces on the finished piece of furniture testify to the unique nature
    of the (unsuccessful) execution, or that, inspired by the micro-trend of
    "Ikea hacking," the official instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was taken by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    a sequence of signs one at a time, manipulate them according to established
    rules, and then write them out again. The validity of his model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
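
    Turing's model can be illustrated with a minimal sketch (a
    hypothetical one-state machine that inverts a string of bits; the
    rule table is an invented example, not the historical formalism):

    ```python
    # rules: (state, symbol read) -> (symbol to write, head movement, next state)
    rules = {
        ("s", "0"): ("1", 1, "s"),   # read 0: write 1, move right
        ("s", "1"): ("0", 1, "s"),   # read 1: write 0, move right
        ("s", "_"): ("_", 0, None),  # blank cell: halt
    }

    def run(tape: str) -> str:
        cells, head, state = list(tape + "_"), 0, "s"
        while state is not None:
            write, move, state = rules[(state, cells[head])]
            cells[head] = write      # manipulate the sign according to the rules
            head += move             # move along the tape
        return "".join(cells).rstrip("_")

    assert run("10110") == "01001"   # fixed rules, deterministic result
    ```

    Storing the `rules` table in the same memory as the tape, so that the
    program itself can be read and manipulated like data, is precisely
    the step from the Turing machine to von Neumann's universal computer.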

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semi-conductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
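
    Using the figures just cited, the arithmetic behind these factors can
    be checked in a few lines:

    ```python
    # Figures from the text: IBM 360/40 (1965) vs. a quad-core server (c. 2005)
    ops_old, ops_new = 40_000, 40_000_000_000     # calculations per second
    price_old, price_new = 1_500_000, 1_500       # US dollars per unit

    performance_gain = ops_new / ops_old          # 1,000,000 (a million)
    price_drop = price_old / price_new            # 1,000 (a thousand)
    price_performance = performance_gain * price_drop  # 1,000,000,000 (a billion)

    # Storage: dollars per gigabyte, 1980 vs. 2010
    storage_drop = 400_000 / 0.10                 # 4,000,000 (4 million)
    ```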

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction
    with new possibilities to formalize and mechanize increasing areas of
    creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sport reports it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called E-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses historical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by its poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of them are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
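
    Sketched in outline, this procedure can be expressed in a few lines of
    Python. Everything specific below -- the fitness function, the
    population size, the mutation rate -- is an invented stand-in for
    illustration, not a description of any actual system:

    ```python
    import random

    POP_SIZE, N_VARS, MUT_RATE = 50, 4, 0.1

    def fitness(candidate):
        # Toy evaluation: how close the candidate's variables come to a
        # hypothetical target; a real system would test against observed data.
        target = [1.0, -2.0, 0.5, 3.0]
        return -sum((c - t) ** 2 for c, t in zip(candidate, target))

    # First generation: values chosen at random to diminish the influence
    # of the programmer's presuppositions (the type of variable, however,
    # remains a choice).
    population = [[random.uniform(-5, 5) for _ in range(N_VARS)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):  # or: until the result is "sufficient"
        # Evaluation: rank every candidate by how close it comes to the goal.
        population.sort(key=fitness, reverse=True)
        best = population[:POP_SIZE // 5]
        # Selection and recombination: the best are combined with one another.
        children = []
        while len(children) < POP_SIZE:
            a, b = random.sample(best, 2)
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: random changes are introduced.
            child = [c + random.gauss(0, 1) if random.random() < MUT_RATE
                     else c for c in child]
            children.append(child)
        population = children

    print(max(population, key=fitness))  # the "cultivated" result
    ```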

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to learn how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from collections containing images
    both with and without faces -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
    however, such as that which occurred on May 6, 2010, when the Dow Jones
    Index dropped almost a thousand points in a few minutes (and was thus
    perceptible to humans), have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated ever more areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
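
    The recursive principle -- a document gains value from the links
    pointing to it, each weighted by the value of the linking site -- can
    be illustrated with a minimal sketch in Python. The four-page link
    graph is invented; the damping factor of 0.85 is the value given in
    the original PageRank publication:

    ```python
    # A tiny invented link graph: each page lists the pages it links to.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # uniform starting values
    d = 0.85  # damping factor, as in the original PageRank paper

    for _ in range(50):  # iterate until the values stabilize
        new_rank = {}
        for p in pages:
            # A page receives value from the pages linking to it; each
            # linking page passes on its own value, divided among its
            # outgoing links.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - d) / len(pages) + d * incoming
        rank = new_rank

    # The hierarchical list of results: highest value first.
    for p in sorted(rank, key=rank.get, reverse=True):
        print(p, round(rank[p], 3))
    ```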

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.
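
    The abstraction involved can be made explicit in a few lines: nothing
    of the content of the publications enters the calculation, only
    records of who cites whom. The records below are invented:

    ```python
    from collections import Counter

    # Invented citation records; the texts themselves never appear.
    citations = [
        ("paper_A", "paper_C"), ("paper_B", "paper_C"),
        ("paper_D", "paper_C"), ("paper_A", "paper_B"),
    ]

    # Every citation counts as an indirect vote for relevance.
    votes = Counter(cited for _citing, cited in citations)
    for paper, count in votes.most_common():
        print(paper, count)  # paper_C ranks first, with three "votes"
    ```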

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical statistical model of an
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns documents a rank within a
    dynamic and singular universe of information that is tailored to each
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. Rather, profiles consist,
    on the one hand, of sub-individual elements -- fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and, on the
    other hand, of clusters of multiple people, so that the person being
    modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."
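
    A sketch of this kind of prediction, with invented activity logs
    standing in for the recorded behavior of a cluster; the probability
    *y* is simply the share of cluster members who, having done *x*, went
    on to do *z*:

    ```python
    # Invented activity logs of the other users in a profile's cluster.
    histories = [
        ["x", "z"], ["x", "w"], ["x", "z"], ["v", "z"], ["x", "z", "w"],
    ]

    def prob_next(x, z):
        """Of all users who did x, what share went on to do z afterwards?"""
        did_x = [h for h in histories if x in h]
        did_z_after = [h for h in did_x if z in h[h.index(x) + 1:]]
        return len(did_z_after) / len(did_x) if did_x else 0.0

    # "A person who has already done x will, with a probability of y, do z."
    print(prob_next("x", "z"))  # 0.75 -- the basis for a recommendation
    ```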

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams should
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and recency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
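
    Public descriptions of EdgeRank render the score of an item, roughly,
    as the product of these three variables. The following sketch is a
    reconstruction on that basis; the decay function and all numbers are
    illustrative assumptions, not the actual parameters used by Facebook:

    ```python
    import math

    def edge_score(affinity, weight, age_hours, half_life=24.0):
        # affinity: past interaction between the viewing user and the author
        # weight: how heavily this type of content or interaction counts
        # decay: newer posts score higher; the half-life here is arbitrary
        decay = math.exp(-math.log(2) * age_hours / half_life)
        return affinity * weight * decay

    # Invented candidate posts for one user's newsfeed.
    posts = [
        {"id": 1, "affinity": 0.9, "weight": 1.0, "age_hours": 30},
        {"id": 2, "affinity": 0.2, "weight": 2.0, "age_hours": 2},
        {"id": 3, "affinity": 0.6, "weight": 0.5, "age_hours": 1},
    ]

    ranked = sorted(posts, reverse=True,
                    key=lambda p: edge_score(p["affinity"], p["weight"],
                                             p["age_hours"]))
    print([p["id"] for p in ranked])  # the order in which the feed is shown
    ```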
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of the results list is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read or can read only poorly, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to pay certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the antitrust proceedings were recommenced by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the presence of books released by irritating publishers that
    dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social networking. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach to behavioral psychology that postulated that human behavior
    could be explained, predicted, and controlled purely by our outwardly
    observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or herself,
    for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer our questions before we ask
    them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

    [8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
    the alphabet, every manuscript is unique because it not only depended on
    the sequence of letters but also on the individual ability of a given
    scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
    particular shape. With the rise of the printing press, the alphabet shed
    these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elizabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

    [16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0
  • Stalder
    The Digital Condition
    2018


    ---
    lang: en
    title: The Digital Condition
    ---

    ::: {.figure}
    []{#coverstart}

    ![Cover page](images/cover.jpg)
    :::

    Table of Contents

    1. [Preface to the English Edition](#fpref)
    2. [Acknowledgments](#ack)
    3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
        1. [Notes](#f6-ntgp-9999)
    4. [I: Evolution](#c1)
        1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
        2. [The Culturalization of the World](#c1-sec-0006)
        3. [The Technologization of Culture](#c1-sec-0009)
        4. [From the Margins to the Center of Society](#c1-sec-0013)
        5. [Notes](#c1-ntgp-9999)
    5. [II: Forms](#c2)
        1. [Referentiality](#c2-sec-0002)
        2. [Communality](#c2-sec-0009)
        3. [Algorithmicity](#c2-sec-0018)
        4. [Notes](#c2-ntgp-9999)
    6. [III: Politics](#c3)
        1. [Post-democracy](#c3-sec-0002)
        2. [Commons](#c3-sec-0011)
        3. [Against a Lack of Alternatives](#c3-sec-0017)
        4. [Notes](#c3-ntgp-9999)

    [Preface to the English Edition]{.chapterTitle} {#fpref}

    ::: {.section}
    This book posits that we in the societies of the (transatlantic) West
    find ourselves in a new condition. I call it "the digital condition"
    because it gained its dominance as computer networks became established
    as the key infrastructure for virtually all aspects of life. However,
    the emergence of this condition pre-dates computer networks. In fact, it
    has deep historical roots, some of which go back to the late nineteenth
    century, but it really came into being after the late 1960s. As many of
    the cultural and political institutions shaped by the previous condition
    -- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
    forms of personal and collective orientation and organization emerged
    which have been shaped by the affordances of this new condition. Both
    the historical processes which unfolded over a very long time and the
    structural transformation which took place in a myriad of contexts have
    been beyond any deliberate influence. Although such changes were
    obviously caused by social actors, their magnitude was simply too
    great, too distributed, and too complex to be attributed to, or molded
    by, any particular (set of) actor(s).

    Yet -- and this is the core of what motivated me to write this book --
    this does not mean that we have somehow moved beyond the political,
    beyond the realm in which identifiable actors and their projects do
    indeed shape our collective []{#Page_vii type="pagebreak"
    title="vii"}existence, or that there are no alternatives to future
    development already expressed within contemporary dynamics. On the
    contrary, we can see very clearly that as the center -- the established
    institutions shaped by the affordances of the previous condition -- is
    crumbling, more economic and political projects are rushing in to fill
    that void with new institutions that advance their competing agendas.
    These new institutions are well adapted to the digital condition, with
    its chaotic production of vast amounts of information and innovative
    ways of dealing with it.

    From this, two competing trajectories have emerged which are
    simultaneously transforming the space of the political. First, I used
    the term "post-democracy" for the trajectory that expands the
    possibilities, and even requirements, of (personal) participation,
    while ever larger aspects of
    (collective) decision-making are moved to arenas that are structurally
    disconnected from those of participation. In effect, these arenas are
    forming an authoritarian reality in which a small elite is vastly
    empowered at the expense of everyone else. The purest incarnation of
    this tendency can be seen in the commercial social mass media, such as
    Facebook, Google, and the others, as they were newly formed in this
    condition and have not (yet) had to deal with the complications of
    transforming their own legacy.

    For the other trajectory, I applied the term "commons" because it
    expands both the possibilities of personal participation and agency, and
    those of collective decision-making. This tendency points to a
    redefinition of democracy beyond the hollowed-out forms of political
    representation characterizing the legacy institutions of liberal
    democracy. The purest incarnation of this tendency can be found in the
    institutions that produce the digital commons, such as Wikipedia and the
    various Free Software communities whose work has been and still is
    absolutely crucial for the infrastructural dimensions of the digital
    networks. They are the most advanced because, again, they have not had
    to deal with institutional legacies. But both tendencies are no longer
    confined to digital networks and are spreading across all aspects of
    social life, creating a reality that is, on the structural level,
    surprisingly coherent and, on the social and political level, full of
    contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
    title="viii"}

    I traced some aspects of these developments right up to early 2016, when
    the German version of this book went into production. Since then a lot
    has happened, but I resisted the temptation to update the book for the
    English translation because ideas are always an expression of their
    historical moment and, as such, updating either turns into a completely
    new version or a retrospective adjustment of the historical record.

    What has become increasingly obvious during 2016 and into 2017 is that
    central institutions of liberal democracy are crumbling more quickly and
    dramatically than was expected. The race to replace them has kicked into
    high gear. The main events driving forward an authoritarian renewal of
    politics took place on a national level, in particular the vote by the
    UK to leave the EU (Brexit) and the election of Donald Trump to the
    office of president of the United States of America. The main events
    driving the renewal of democracy took place on a metropolitan level,
    namely the emergence of a network of "rebel cities," led by Barcelona
    and Madrid. There, community-based social movements established their
    candidates in the highest offices. These cities are now putting in place
    practical examples that other cities could emulate and adapt. For the
    concerns of this book, the most important concept put forward is that of
    "technological sovereignty": to bring the technological infrastructure,
    and its developmental potential, back under the control of those who are
    using it and are affected by it; that is, the citizens of the
    metropolis.

    Over the last 18 months, the imbalances between the two trajectories
    have become even more extreme because authoritarian tendencies and
    surveillance capitalism have been strengthened more quickly than the
    commons-oriented practices could establish themselves. But it does not
    change the fact that there are fundamental alternatives embedded in the
    digital condition. Despite structural transformations that affect how we
    do things, there is no inevitability about what we want to do
    individually and, even more importantly, collectively.

    ::: {.poem}
    ::: {.lineGroup}
    Zurich/Vienna, July 2017[]{#Page_ix type="pagebreak" title="ix"}
    :::
    :::
    :::

    [Acknowledgments]{.chapterTitle} {#ack}

    ::: {.section}
    While it may be conventional to cite one person as the author of a book,
    writing is a process with many collective elements. This book in
    particular draws upon many sources, most of which I am no longer able to
    acknowledge with any certainty. Far too often, important references came
    to me in parenthetical remarks, in fleeting encounters, during trips, at
    the fringes of conferences, or through discussions of things that,
    though entirely new to me, were so obvious to others as not to warrant
    any explication. Often, too, my thinking was influenced by long
    conversations, and it is impossible for me now to identify the precise
    moments of inspiration. As far as the themes of this book are concerned,
    four settings were especially important. The international discourse
    network "nettime," which has a mailing list of 4,500 members and which I
    have been moderating since the late 1990s, represents an inexhaustible
    source of internet criticism and, as a collaborative filter, has enabled
    me to follow a wide range of developments from a particular point of
    view. I am also indebted to the Zurich University of the Arts, where I
    have taught for more than 10 years and where the students have been
    willing to explain to me, again and again, what is already self-evident
    to them. Throughout my time there, I have been able to observe a
    dramatic shift. For today\'s students, the "new" is no longer new but
    simply obvious, whereas they []{#Page_x type="pagebreak" title="x"}have
    experienced many things previously regarded as normal -- such as
    checking out a book from a library (instead of downloading it) -- as
    needlessly complicated. In Vienna, the hub of my life, the World
    Information Institute has for many years provided a platform for
    conferences, publications, and interventions that have repeatedly raised
    the stakes of the discussion and have brought together the most
    interesting range of positions without regard to any disciplinary
    boundaries. Housed in Vienna, too, is the Technopolitics Project, a
    non-institutionalized circle of researchers and artists whose
    discussions of techno-economic paradigms have informed this book in
    fundamental ways and which has offered multiple opportunities for me to
    workshop inchoate ideas.

    Not everything, however, takes place in diffuse conversations and
    networks. I was also able to rely on the generous support of several
    individuals who, at one stage or another, read through, commented upon,
    and made crucial improvements to the manuscript: Leonhard Dobusch,
    Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
    Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
    Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
    thanks are owed to Rebina Erben-Hartig, who edited the original German
    manuscript and greatly improved its readability. I am likewise grateful
    to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
    Verlag, whose faith in the book never wavered despite several delays.
    Regarding the English version at hand, it has been a privilege to work
    with a translator as skillful as Valentine Pakis. Over the past few
years, writing this book might have been the most important project in
    my life had it not been for Andrea Mayr. In this regard, I have been
    especially fortunate.[]{#Page_xi type="pagebreak"
    title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
    :::

    Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal []{#Page_1 type="pagebreak"
    title="1"}self-discovery, for community, and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was also
seemingly paradoxical on a formal level: extremely focused and
completely open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
    diagnosis: the erosion of old cultural forms, institutions, and
    certainties is not just something we affirm, but new ones have already
    formed whose contours are easy to identify not only in niche sectors but
    in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state\'s
    []{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third []{#Page_4 type="pagebreak" title="4"}of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility to participate in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.[]{#Page_6 type="pagebreak" title="6"}

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
Digital Condition* might be expected to address, notably the matter of
copyright. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

    Before moving on to the first chapter, I should first briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    []{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute that
is subject to multiple ongoing changes, each happening
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
    material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class []{#Page_8 type="pagebreak"
    title="8"}of technological artifacts -- the computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites are circling around above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which is not limited to digital media but rather appears everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9
type="pagebreak" title="9"}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. As to
    the concrete realities in which these things will materialize, this is
    currently being decided in an open and ongoing process. The aim of this
    book is to contribute to our understanding of this process.[]{#Page_10
    type="pagebreak" title="10"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
    type="pagebreak" title="176"}

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

[I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}

    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
    coordination and cooperation are also not so new after all. Many of them
    have existed for a long time. At first most of them were totally
    separate from the technologies for which, later on, they would become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    []{#Page_11 type="pagebreak" title="11"}background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    []{#Page_12 type="pagebreak" title="12"}has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    []{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity was already engaged in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the dynamics
    that propelled the sectors that he associated with one another not only
were very different but also had originated as integral components in
    the development of the industrial production of goods. They were more of
    an extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This []{#Page_14 type="pagebreak"
    title="14"}enabled even larger factories to be built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, []{#Page_15 type="pagebreak"
    title="15"}new needs had to be created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
    overwhelming that both of them are not only struck dumb, but Jean even
blushes.

It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media []{#Page_16
    type="pagebreak" title="16"}became their locus of negotiation. Between
    1880 and 1917, for instance, commercial advertising costs in the United
    States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
it considers economically promising should be promoted, and that
society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to []{#Page_18
    type="pagebreak" title="18"}new ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    []{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
television and radio, and film and video. These activities were
elevated to matters of political importance on account of their
potential to create
    wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by []{#Page_20
    type="pagebreak" title="20"}the sociologist and consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who, as the first openly gay top politician in
    Germany, exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, []{#Page_21 type="pagebreak"
    title="21"}and nearly all of these reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as []{#Page_22
    type="pagebreak" title="22"}to yield no search results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    []{#Page_23 type="pagebreak" title="23"}follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the first gay
    demonstration in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While []{#Page_25 type="pagebreak"
    title="25"}a broader and broader spectrum of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by defining such politics -- on the basis
    of bourgeois, individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to teach
    people about safe sexual practices for everyone and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and a treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is []{#Page_26 type="pagebreak"
    title="26"}especially important. This informal structure should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition\" -- continued to
    hold true.

    If the gay movement is representative of the social liberation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure []{#Page_28
    type="pagebreak" title="28"}are gradually entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural []{#Page_29 type="pagebreak" title="29"}production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many areas,
    it remains so today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency on to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never
    adopt the culture of the colonialists -- the \"English book,\" as he
    calls it -- in a fully passive manner. Their previous culture is never
    simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    []{#Page_31 type="pagebreak" title="31"}strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects who, from the dominant perspective,
    are deemed incapable of doing so nevertheless appropriate certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement is set under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of a unity that must be preserved and is
    threatened whenever \"others\" are empowered to speak, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, yet the central meaning of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    non-negotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his function as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against \"foreigners\" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qualities of an object were supposed to make it independent from
    changing times and from the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, []{#Page_36 type="pagebreak" title="36"}which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportun­ities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays in cities, as well as a few other
    central locations, are regularly constructed in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to []{#Page_38 type="pagebreak"
    title="38"}which the strategies and methods of design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself over society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of \"guerrilla marketing\" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to customize certain elements
    of a shoe within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as mechanical mills and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as []{#Page_40
    type="pagebreak" title="40"}willing participants in and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled to take place during the subsequent year. In the same year,
    the Secretary of the Interior organized a conference to investigate
    faster methods of data processing. Two methods were tested for making
    manual labor more efficient, one of which had the potential to achieve
    greater efficiency by means of novel data-processing machines. The
    latter system emerged as the clear victor; developed by an engineer
    named Herman Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coupling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information []{#Page_41
    type="pagebreak" title="41"}management. Data were no longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly that they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
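
    Hollerith's punch cards, in other words, anticipated the record-oriented
    data structures that are taken for granted today. A minimal sketch (in
    Python, with invented modern stand-in data -- not a reconstruction of the
    1890 census) may help to clarify the difference between a fixed, linear
    arrangement and modular units that can be recombined in any given way:

    ``` {.python}
    # Illustrative sketch only; all names and values are invented.

    # Fixed, linear arrangement: a table whose meaning depends on the
    # order and layout in which it was written down.
    ledger = [
        ("District 1", 1043),
        ("District 2", 877),
        ("District 3", 1310),
    ]

    # Modular units: each record is self-contained, like a single punch
    # card, and can be selected, counted, and re-sorted in any order.
    cards = [
        {"district": "District 3", "occupation": "farmer", "age": 41},
        {"district": "District 1", "occupation": "clerk", "age": 23},
        {"district": "District 1", "occupation": "farmer", "age": 36},
    ]

    # Tabulating: count all cards sharing a property, regardless of the
    # order in which they were recorded.
    farmers = [card for card in cards if card["occupation"] == "farmer"]
    print(len(farmers))  # -> 2

    # Recombination: the same cards, rearranged by a different criterion.
    by_age = sorted(cards, key=lambda card: card["age"])
    print([card["district"] for card in by_age])
    # -> ['District 1', 'District 1', 'District 3']
    ```

    The gain in flexibility described above lies precisely in this
    order-independence: the same set of records can answer questions that
    were not anticipated when the data were collected.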

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not []{#Page_42 type="pagebreak"
    title="42"}least because many mass media companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept social relations to such a degree
    that the "repressive state apparatuses" (the police, the military, etc.)
    form a constant background in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics which are thus justified) even though,
    by doing so, they are forced to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are less a sphere in which an external
    reality is depicted than a constitutive element of reality itself.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    []{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas had been circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Düsseldorf. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the corridor, at a height of about 3
    meters. If the
    viewer moved down the corridor toward the two []{#Page_44
    type="pagebreak" title="44"}monitors, he or she would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with traditional film cameras, these new
    cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72[]{#Page_45 type="pagebreak"
    title="45"}^](#c1-note-0072){#c1-note-0072a}

    What these initiatives, in Western Europe and the United States
    alike, had in common was their attempt to close the gap between the
    consumption and production of media, to activate the public, and at
    least in part to experiment with the media themselves. Non-professional
    producers were empowered with the ability to control who told their
    stories and how this happened. Groups that previously had no access to
    the mediated public sphere now had opportunities to represent themselves
    and their own interests. By working together on their own productions,
    such groups demystified the medium of television and simultaneously
    equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s and in the space
    of Radio Dreyeckland in Freiburg, which had been founded in 1977 as
    Radio Verte Fessenheim, operations began at Germany\'s first pirate or
    citizens\' radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    []{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with their weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained
    steadfast in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed down to its inherent laws, which
    could then be manipulated to affect things beyond the media.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77[]{#Page_47 type="pagebreak"
    title="47"}^](#c1-note-0077){#c1-note-0077a}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists had in common with the
    hackers and pioneers of computer networks was the idea that
    communication media are spaces for agency. During the 1960s, these
    programmers were likewise in search of alternatives, though they
    pursued them not in counter-publics but rather in alternative
    lifestyles and forms of communication.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary
    in order to organize social life, because it radically reduced the
    world\'s complexity by forcing it through the bottleneck of official
    procedures.
    However, the price paid for such efficiency involved the atrophying of
    human relationships, which had to be subordinated to rigid processes
    that were incapable of registering unique characteristics and
    differences and were unable to react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This complex was possible because, as a theory, cybernetics was
    formulated in extraordinarily abstract terms, so much so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}
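
    The cybernetic identification of communication and control -- both
    understood as activities oriented toward informational feedback --
    can be made concrete with the textbook example of a closed feedback
    loop. The following sketch is a schematic illustration with
    hypothetical values, not an example drawn from the literature cited
    here.

    ``` {.python}
    # A minimal feedback loop: the controller "communicates" with its
    # environment only through measurements and corrective signals, and
    # "control" consists of nothing but reacting to the information fed
    # back to it. All values are hypothetical.
    target = 20.0        # desired room temperature, in degrees Celsius
    temperature = 14.0   # current state of the environment

    for step in range(8):
        error = target - temperature   # feedback: measured deviation
        correction = 0.5 * error       # control signal derived from it
        temperature += correction      # the environment responds
        print(f"step {step}: temperature = {temperature:.2f}")
    ```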

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. On both the
    micro and the macro level (and this is the decisive point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. What was yearned for was a
    liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. He regarded the underlying vision
    primarily as an innocent but amusing fantasy and not as a potential
    threat against which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. Along with the Internet
    Engineering Task Force (which he directed), Clark created the so-called
    Request-for-Comments documents, with which ideas could be presented to
    interested members of the community and simultaneous feedback could be
    collected in order to work through the ideas in question and thus reach
    a rough consensus. If such a consensus could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This undermined the
    traditional assumption that social units, once they reach a certain
    size, must introduce hierarchical structures in order to reduce
    complexity and communication overhead. In other words, the foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that was enabled by the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that were only possible
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}
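
    In practice, these freedoms are attached to a program by distributing
    the license text alongside the source code and adding a short notice
    to each file. The following sketch shows the kind of notice that the
    license\'s own instructions recommend; the file name and copyright
    holder are placeholders.

    ``` {.python}
    # hello.py -- part of a hypothetical program released under the GPL.
    # Copyright (C) 2016 A. Hacker
    #
    # This program is free software: you can redistribute it and/or
    # modify it under the terms of the GNU General Public License as
    # published by the Free Software Foundation, either version 3 of the
    # License, or (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    print("hello, world")
    ```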

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    part, program code still had to be sent by post. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- and
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing archives, filter
    functions, and search capabilities that made it possible to organize
    large amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
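
    The everyday mechanics of this cooperation were mundane: changes
    circulated not as whole programs but as textual "patches" --
    line-by-line differences against a shared version -- which mailing
    lists could carry and maintainers could apply or reject. As a minimal
    sketch of the idea, the following fragment derives such a patch with
    Python\'s standard difflib module; the file contents are invented,
    and developers of the period of course used command-line tools rather
    than Python.

    ``` {.python}
    # Derive a unified diff between a maintainer's version of a file
    # and a contributor's modified version. The diff, not the whole
    # file, is what travels over the mailing list.
    import difflib

    original = [
        "def greet():\n",
        '    print("hello, world")\n',
    ]
    modified = [
        "def greet(name):\n",
        '    print(f"hello, {name}")\n',
    ]

    patch = difflib.unified_diff(
        original, modified,
        fromfile="a/greet.py", tofile="b/greet.py",
    )
    print("".join(patch))
    ```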
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    were already no longer very new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the number of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    eliminated the need for dial-up modems, and the internet was suddenly "here" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    know­ledge workers alone, but rather something that is required by an
    increasingly broader stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (New York: John Wiley &
    Sons, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New "Post-Modern" World* (New York: Harper,
    1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundesanzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally.

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1990).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des
    Harems: Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturalismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imra Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found at: \<\>.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003. See \<\>.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziationen A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial of service attack" (DDOS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where it was disassociated from
    artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    countercultural entrepreneurs.

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Taskforce* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}

    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Because culture is understood here as shared social meaning,
    such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without them, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
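
    What such a reduction looks like in principle can be suggested by a
    deliberately crude sketch -- my own illustration, not a description of
    any actual search engine, whose ranking signals are far more elaborate
    and largely opaque. The structure, however, is the same: score a corpus
    that is too large to read, sort it, and truncate it to a screenful.

    ```python
    # A minimal sketch, assuming nothing about any real search engine:
    # an algorithm reduces an unreadably large corpus ("big data") to a
    # short, humanly readable list ("small data") by scoring, sorting,
    # and truncating.

    def top_k(documents, query, k=10):
        """Rank documents by a crude term-frequency score and keep only k."""
        terms = query.lower().split()

        def score(doc):
            text = doc.lower()
            return sum(text.count(term) for term in terms)

        # The truncation is the decisive step: billions of candidates
        # could go in, but only a single screenful comes out.
        return sorted(documents, key=score, reverse=True)[:k]

    # Toy corpus; in practice this would be an index of billions of pages.
    corpus = [
        "the printing press made writing digital",
        "remix culture and referentiality",
        "a history of public libraries",
    ]
    print(top_k(corpus, "printing press", k=2))
    ```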
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitalize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
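
    The property at stake here -- that discrete signs can be copied without
    loss while analog signals degrade with every generation -- can be made
    concrete in a few lines of code (a hypothetical illustration of the
    principle, not something drawn from the sources discussed):

    ```python
    # A minimal sketch of the difference between discrete and analog
    # copying. The names and values are invented for illustration.

    import random

    # A "mosaic": information coded in a small, fixed repertoire of
    # discrete, in-themselves meaningless signs (here, color names).
    mosaic = ["red", "blue", "blue", "gold", "red"]

    # Digital copying identifies each discrete sign exactly, so every
    # copy is indistinguishable from the source, generation after
    # generation, and the signs can also be rearranged freely.
    digital_copy = list(mosaic)
    assert digital_copy == mosaic  # no loss of information

    # Analog copying re-measures a continuous value each time, and every
    # generation adds a little noise -- the problem of manuscript culture,
    # in which each new copy potentially degraded the original.
    signal = [0.3, 0.7, 0.7, 0.9, 0.3]
    for _generation in range(100):
        signal = [value + random.gauss(0, 0.01) for value in signal]
    print(signal)  # after 100 generations, the values have drifted
    ```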

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    sank significantly. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and thus ceased to be a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this information flood of rapidly amassing texts,
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that was taking place not far
    behind, society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digit­ally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the cultural minister of the
    European Union launched the virtual Europeana library, which occurred
    after a number of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that they are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather by the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; the latter are not accessible to the public but
    rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1888, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photographs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 billion
    photographs. Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organ­izing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt has usually
    traveled a thousand kilometers before it ends up on a shelf in the
    supermarket. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new about filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying medial images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe the
    practice of certain attendees to dress up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. As soon as the subculture has exceeded a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities that take place almost incidentally,
    however, these three practices usually do not make any significant or
    lasting differences. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billions of sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms, in game worlds, but also
    through cultural-economic productions of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched so-called "Rijksstudios." Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning depends on the work that has to
    be done to maintain that context. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful restoration of the Berlin City Palace, combined
    with its repurposed function as a museum and meeting place, is typical
    of this new attitude in dealing with our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    It is impossible for an individual alone to orient him- or herself within a complex
    environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
    fundamentally different and exclusive types of social relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with each other,
    no one can leave his or her place without calling into question the
    arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for they are embedded
    within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary highly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" in
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken down, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are not suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is not especially challenging any more,
    especially because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality take place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. While at a gathering of the Occupy movement, a sure way to be
    kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empirical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geographical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize themselves, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a cooperative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the mire of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. Its members decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second sphere to begin organizing itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which was foisted upon it from the
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend to each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In a growing number of
    social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather one is allowed to -- and indeed ought to -- pursue his or
    her own interests authentically.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    worlds of free software and Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, approximately 27 percent of all jobs related to
    computer science were held by women; in the field of free software,
    their representation was far lower -- on average less than 2 percent.
    And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If consensus breaks down over whose contributions have
    been carrying the most weight, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical pro­cesses that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be determined in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure instructions can be
    followed without interpretation. Second, it must be possible in practice
    to execute the individual steps together. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.
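
    To make these three requirements concrete, here is a minimal sketch in
    Python (my own illustration, not drawn from any system discussed in
    this chapter). Euclid's procedure for finding the greatest common
    divisor is determined in all three respects: every step is unambiguous,
    executable in practice, and mechanical.

    ``` python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: finite, unambiguous, mechanically executable.

        Input: two positive integers. Output: their greatest common divisor.
        """
        while b != 0:          # each step is completely determined ...
            a, b = b, a % b    # ... and requires no interpretation
        return a               # terminates after a finite number of steps

    assert gcd(48, 36) == 12   # fixed input always yields the same output
    ```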

    Defined in such general terms, it would also be possible to understand
    the instruction manual for a typical piece of Ikea furniture as an
    algorithm. It is a set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (the supposedly
    unambiguous pictograms notwithstanding) can lead to problems, whether
    because scratches and other traces on the finished piece of furniture
    testify to the unique nature of the (unsuccessful) execution, or
    because, inspired by the micro-trend of "Ikea hacking," the official
    instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was made by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    a discrete set of signs step by step, manipulate them according to
    established rules, and then write them out again. The validity of his
    model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
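
    What this architecture means in practice can be suggested by a toy
    sketch (my own, and greatly simplified): in the program below, the
    rules of a Turing machine are ordinary data held in memory, so the same
    interpreter can execute any rule set it is given, just as von Neumann's
    design lets one machine run every written algorithm.

    ``` python
    # A toy Turing machine whose rules are data, not hardware. The example
    # rule set inverts a tape of binary digits (0 -> 1, 1 -> 0) and halts
    # at the first blank cell ("_").
    rules = {
        # (state, symbol read): (symbol to write, head movement, next state)
        ("scan", "0"): ("1", 1, "scan"),
        ("scan", "1"): ("0", 1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }

    def run(tape, state="scan"):
        head = 0
        while state != "halt":
            if head == len(tape):            # extend the tape with blanks
                tape.append("_")
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).rstrip("_")

    print(run(list("0110")))                 # prints "1001"
    ```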

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semi-conductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
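
    The factors cited above follow from simple arithmetic and can be
    checked in a few lines (the figures are those given in the text):

    ``` python
    # Checking the performance figures cited above.
    ops_1965, ops_2005 = 40_000, 40_000_000_000    # calculations per second
    cost_1965, cost_2005 = 1_500_000, 1_500        # dollars per unit

    print(ops_2005 / ops_1965)        # 1,000,000x performance gain
    print(cost_1965 / cost_2005)      # 1,000x price reduction
    print((ops_2005 / ops_1965) * (cost_1965 / cost_2005))  # 1,000,000,000x

    print(400_000 / 0.10)             # storage: 4,000,000x price reduction
    ```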

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction
    with new possibilities to formalize and mechanize increasing areas of
    creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sport reports it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}
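
    The internals of Stats Monkey and its successors are proprietary, but
    the general technique described here, extracting a characteristic
    pattern from game data and rendering it through templates, can be
    suggested by a toy sketch. All data, pattern names, and phrasing below
    are invented for illustration.

    ``` python
    # A toy sketch of data-to-text generation: detect a pattern in the
    # running score, then fill a template.
    game = {"home": "Cubs", "away": "Sox",
            "score_by_inning": [(0, 1), (2, 1), (5, 3), (5, 9)]}

    def detect_pattern(innings):
        """Compare who led at the start with who led at the end."""
        home_led_first = innings[0][0] > innings[0][1]
        home_led_last = innings[-1][0] > innings[-1][1]
        return ("a dramatic comeback" if home_led_first != home_led_last
                else "a wire-to-wire win")

    def report(game, perspective="neutral"):
        final = game["score_by_inning"][-1]
        winner = game["home"] if final[0] > final[1] else game["away"]
        tone = "thrilling" if perspective == winner else "hard-fought"
        return (f"{winner} sealed {detect_pattern(game['score_by_inning'])} "
                f"in a {tone} game, {max(final)}-{min(final)}.")

    print(report(game))           # neutral observer
    print(report(game, "Sox"))    # from one team's standpoint
    ```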

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called e-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses historical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by its poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of these are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
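
    The iterative scheme just described (random first generation,
    evaluation, selection, recombination, mutation) can be condensed into a
    short sketch. The problem below is deliberately trivial, matching a
    hidden target vector, and has nothing to do with Google's flu model; it
    only illustrates how solutions are "cultivated" rather than designed
    analytically.

    ``` python
    import random

    TARGET = [0.2, -1.3, 0.7, 3.1]       # the desired output

    def fitness(candidate):              # evaluation: closeness to the goal
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def crossover(a, b):                 # combine two good candidates
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(c, rate=0.1):             # introduce random changes
        return [x + random.gauss(0, 0.1) if random.random() < rate else x
                for x in c]

    # First generation: values chosen at random to limit the programmer's
    # presuppositions about the solution.
    population = [[random.uniform(-5, 5) for _ in range(4)]
                  for _ in range(50)]

    for _ in range(300):                 # repeat until "sufficient"
        population.sort(key=fitness, reverse=True)
        best = population[:10]           # selection
        population = best + [mutate(crossover(random.choice(best),
                                              random.choice(best)))
                             for _ in range(40)]

    print(max(population, key=fitness))  # close to TARGET: found, not derived
    ```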

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to know how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from volumes of images that contain
    images both with faces and without them -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
    however, such as that which occurred on May 6, 2010, when the Dow Jones
    Index dropped almost a thousand points in a few minutes (and was thus
    perceptible to humans), have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated increased areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
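
    The two variables described above, the number of inbound links and the
    relevance of the pages they come from, define a circular calculation
    that can be resolved iteratively. The sketch below follows the
    structure of the original published formulation, including the
    customary damping factor; the link graph is invented, and the
    production system was of course far more elaborate.

    ``` python
    links = {                     # page: pages it links to (invented graph)
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                share = rank[page] / len(outgoing)  # a page passes its rank
                for target in outgoing:
                    new[target] += damping * share  # to everything it links to
            rank = new
        return rank

    # "C" receives the most links, among them one from the highly ranked
    # "A", and therefore ends up at the top of the list:
    for page, value in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(value, 3))
    ```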

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical statistical model of an
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns content a rank within a dynamic
    and singular universe of information that is tailored to every
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. On the one hand, profiles
    rather consist of sub-individual elements -- of fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and they consist,
    on the other hand, of clusters of multiple people, so that the person
    being modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."
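
    The prediction schema quoted here, "a person who has already done *x*
    will, with a probability of *y*, go on to engage in activity *z*,"
    amounts to estimating a conditional probability over the cluster of
    profiles. A minimal sketch with invented data:

    ``` python
    # Invented activity histories for the cluster of profiles that forms
    # part of a person's synthetic profile.
    histories = [
        ["x", "z"], ["x", "z"], ["x", "w"], ["x", "z"], ["y", "z"], ["x"],
    ]

    def probability(z, given, histories):
        """Share of histories containing `given` that also contain `z`."""
        relevant = [h for h in histories if given in h]
        return sum(z in h for h in relevant) / len(relevant)

    # P(z | x) = 0.6: the basis for recommending activity z to this person
    print(probability("z", "x", histories))
    ```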

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams should
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and recency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
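
    Public descriptions of EdgeRank present it, in essence, as a product of
    these three variables. The sketch below follows that multiplicative
    form; the decay rate and all figures are invented.

    ``` python
    def edge_score(affinity, weight, age_hours, decay=0.1):
        """Multiplicative EdgeRank-style score; newer posts count for more."""
        return affinity * weight / (1 + decay * age_hours)

    posts = [
        {"post": "close friend, old",
         "affinity": 0.9, "weight": 2.0, "age_hours": 30},
        {"post": "acquaintance, viral",
         "affinity": 0.2, "weight": 5.0, "age_hours": 2},
        {"post": "friend, fresh",
         "affinity": 0.5, "weight": 1.0, "age_hours": 1},
    ]

    ranked = sorted(posts, reverse=True,
                    key=lambda p: edge_score(p["affinity"], p["weight"],
                                             p["age_hours"]))
    for p in ranked:
        print(p["post"])   # the viral post ranks first, despite low affinity
    ```
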
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of this list is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read, or can read only poorly, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to make certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the antitrust proceedings were reopened by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the visibility of books released by unwelcome publishers that
    had dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social network. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
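    What it means that such variables are "defined" can be made concrete
    with a deliberately hypothetical sketch. The actual ranking functions of
    commercial search engines are trade secrets; every signal, name, and
    weight below is invented:

    ```python
    import math

    def relevance(doc, user, weights):
        """Toy ranking function: a weighted sum of normalized signals."""
        freshness = math.exp(-doc["age_days"] / 30)     # favors recent uploads
        locality = 1.0 if doc["region"] == user["region"] else 0.0
        social = min(doc["friend_shares"], 10) / 10     # favors one's own network
        return (weights["popularity"] * doc["popularity"]
                + weights["freshness"] * freshness
                + weights["locality"] * locality
                + weights["social"] * social)

    doc = {"age_days": 3, "region": "AT", "friend_shares": 7, "popularity": 0.4}
    user = {"region": "AT"}
    weights = {"popularity": 0.5, "freshness": 0.2, "locality": 0.1, "social": 0.2}
    print(relevance(doc, user, weights))
    ```

    Shifting weight from "popularity" to "social," for instance, changes
    nothing technical and everything political: it decides whose world the
    results reflect.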
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach within behavioral psychology which postulated that human
    behavior could be explained, predicted, and controlled purely on the
    basis of outwardly observable and measurable
    actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or
    herself, for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.
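    The epistemic stance can be captured in a few lines of code -- a toy
    model with invented log data, but structurally faithful to the
    behaviorist premise that nothing exists except recorded
    stimulus-reaction pairs:

    ```python
    # Toy "data behaviorism": the model sees only stimulus-reaction pairs.
    from collections import Counter, defaultdict

    log = [  # (stimulus shown, observed reaction) -- invented data
        ("ad_a", "click"), ("ad_a", "ignore"), ("ad_a", "click"),
        ("ad_b", "ignore"), ("ad_b", "ignore"), ("ad_b", "click"),
    ]

    observed = defaultdict(Counter)
    for stimulus, reaction in log:
        observed[stimulus][reaction] += 1

    def predict(stimulus):
        """Predict the most frequent past reaction -- nothing else is knowable."""
        return observed[stimulus].most_common(1)[0][0]

    print(predict("ad_a"))  # -> "click": the person figures only as a black box
    ```

    Whatever the person meant, felt, or intended never enters the model; it
    simply has no place in the data structure.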

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer people\'s questions before they
    even ask them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

    [8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
    the alphabet, every manuscript is unique because it not only depended on
    the sequence of letters but also on the individual ability of a given
    scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
    particular shape. With the rise of the printing press, the alphabet shed
    these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elisabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

    [16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
    defines "selfie" as a "photographic self-portrait; *esp*. one taken with
    a smartphone or webcam and shared via social media."

    [32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
    *Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
    Kulturproduktion* (Vienna: Turia + Kant, 2011).

    [33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
    Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
    (London: Fontana Press, 1977), pp. 142--8.

    [34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
    Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
    wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
    Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
    (Darmstadt: Von Zabern, 2013).

    [35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
    geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
    realization but has long been a special area of research for
    musicologists. What is new, however, is that it is no longer
    controversial outside of this narrow disciplinary discourse. See Peter
    J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
    Field," *Notes* 50 (1994), pp. 851--70.

    [36](#c2-note-0036a){#c2-note-0036}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 56.

    [37](#c2-note-0037a){#c2-note-0037}  Quoted from Eran Schaerf\'s audio
    installation *FM-Scenario: Reality Race* (2013), online.

    [38](#c2-note-0038a){#c2-note-0038}  The number of members, for
    instance, of the two large political parties in Germany, the Social
    Democratic Party and the Christian Democratic Union, reached its peak at
    the end of the 1970s or the beginning of the 1980s. Both were able to
    increase their absolute numbers for a brief time at the beginning of the
    1990s, when the Christian Democratic Union even reached its absolute
    high point, but this can be explained by a surge in new members after
    reunification. By 2010, both parties already had fewer members than
    Greenpeace, whose 580,000 members make it Germany's largest NGO.
    Parallel to this, between 1970 and 2010, the proportion of people
    without any religious affiliation grew to approximately 37 percent.
    That there are more churches and political parties today is indicative
    of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
    for any single organization to attract broad strata of society.

    [39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
    a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

    [40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
    Society*, trans. Charles P. Loomis (East Lansing: Michigan State
    University Press, 1957).

    [41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
    "The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
    *The Cambridge Companion to the Communist Manifesto*, ed. Carver and
    James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
    at 239. For Marx and Engels, this was -- like everything pertaining to
    the dynamics of capitalism -- a thoroughly ambivalent development. For,
    in this case, it finally forced people "to take a down-to-earth view of
    their circumstances, their multifarious relationships" (ibid.).

    [42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
    demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
    1944) that the idea of strictly separated spheres, which are supposed to
    be so typical of society, is in fact highly ideological. He argued above
    all that the attempt to implement this separation fully and consistently
    in the form of the free market would destroy the foundations of society
    because both the life of workers and the environment of the market
    itself would be regarded as externalities. For a recent adaptation of
    this argument, see David Graeber, *Debt: The First 5000 Years* (New
    York: Melville House, 2011).

    [43](#c2-note-0043a){#c2-note-0043}  Tönnies's persistent influence can
    be felt, for instance, in Zygmunt Bauman's negative assessment of the
    compunction to strive for community in his *Community: Seeking Safety in
    an Insecure World* (Malden, MA: Blackwell, 2001).

    [44](#c2-note-0044a){#c2-note-0044}  See, for example, Amitai Etzioni,
    *The Third Way to a Good Society* (London: Demos, 2000).

    [45](#c2-note-0045a){#c2-note-0045}  Jean Lave and Étienne Wenger,
    *Situated Learning: Legitimate Peripheral Participation* (Cambridge:
    Cambridge University Press, 1991), p. 98.

    [46](#c2-note-0046a){#c2-note-0046}  Étienne Wenger, *Cultivating
    Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
    Harvard Business School Press, 2000).

    [47](#c2-note-0047a){#c2-note-0047}  The institutions of the
    disciplinary society -- schools, factories, prisons and hospitals, for
    instance -- were closed. Whoever was inside could not get out.
    Participation was obligatory, and instructions had to be followed. See
    Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
    trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
    type="pagebreak" title="189"}

    [48](#c2-note-0048a){#c2-note-0048}  Weber famously defined power as
    follows: "Power is the probability that one actor within a social
    relationship will be in a position to carry out his own will despite
    resistance, regardless of the basis on which this probability rests."
    Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
    trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
    California Press, 1978), p. 53.

    [49](#c2-note-0049a){#c2-note-0049}  For those in complete despair, the
    following tip is provided: "To get more likes, start liking the photos
    of random people." Such a strategy, it seems, is more likely to increase
    than decrease one's hopelessness. The quotations are from "How to Get
    More Likes on Your Instagram Photos," *WikiHow* (2016), online.

    [50](#c2-note-0050a){#c2-note-0050}  Jeremy Gilbert, *Democracy and
    Collectivity in an Age of Individualism* (London: Pluto Books, 2013).

    [51](#c2-note-0051a){#c2-note-0051}  Diedrich Diederichsen,
    *Eigenblutdoping: Selbstverwertung, Künstlerromantik, Partizipation*
    (Cologne: Kiepenheuer & Witsch, 2008).

    [52](#c2-note-0052a){#c2-note-0052}  Harrison Rainie and Barry Wellman,
    *Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
    2012). The term is practical because it is easy to understand, but it is
    also conceptually contradictory. An individual (an indivisible entity)
    cannot be defined in terms of a distributed network. With a nod toward
    Gilles Deleuze, the cumbersome but theoretically more precise term
    "dividual" (the divisible) has also been used. See Gerald Raunig,
    "Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
    Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
    Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

    [53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
    of Social Signatures in Human Communication," *Proceedings of the
    National Academy of Sciences of the United States of America* 111
    (2014): 942--7.

    [54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
    study of where people find out information about new jobs. As the study
    shows, this information does not usually come from close friends, whose
    level of knowledge often does not differ much from that of the person
    looking for a job, but rather from loose acquaintances, whose living
    environments do not overlap much with one\'s own and who can therefore
    make information available from outside of one\'s own network. See Mark
    Granovetter, "The Strength of Weak Ties," *American Journal of
    Sociology* 78 (1973): 1360--80.

    [55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
    p. 420.

    [56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
    ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
    online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

    [57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
    A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
    Columbia University Press, 2013).

    [58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
    -- has already become so standard that it is now regarded as a way to help
    women achieve a better balance between work and family life. See Kolja
    Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
    2014), online.

    [59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
    (2009), directed by Michael Madsen.

    [60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
    Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
    Press, 1996).

    [61](#c2-note-0061a){#c2-note-0061}  Werner Busch and Peter Schmoock,
    *Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
    1987), p. 179 \[--trans.\].

    [62](#c2-note-0062a){#c2-note-0062}  "'When Attitude Becomes Form' at
    the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
    online.

    [63](#c2-note-0063a){#c2-note-0063}  Owing to the hyper-capitalization
    of the art market, which has been going on since the 1990s, this role
    has shifted somewhat from curators to collectors, who, though validating
    their choices more on financial than on argumentative grounds, are
    essentially engaged in the same activity. Today, leading curators
    usually work closely together with collectors and thus deal with more
    money than the first generation of curators ever could have imagined.

    [64](#c2-note-0064a){#c2-note-0064}  Diedrich Diederichsen, "Showfreaks
    und Monster," *Texte zur Kunst* 71 (2008): 69--77.

    [65](#c2-note-0065a){#c2-note-0065}  Alexander R. Galloway, *Protocol:
    How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
    2004), pp. 7, 75.

    [66](#c2-note-0066a){#c2-note-0066}  Even the *Frankfurter Allgemeine
    Zeitung* -- at least in its online edition -- has begun to publish more
    and more articles in English. The newspaper has accepted the
    disadvantage of higher editorial costs in order to remain relevant in
    the increasingly globalized debate.

    [67](#c2-note-0067a){#c2-note-0067}  Joseph Reagle, "'Free as in
    Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
    online.

    [68](#c2-note-0068a){#c2-note-0068}  Wikipedia\'s own "Editor Survey"
    from 2011 reports a women\'s quota of 9 percent. Other studies have come
    to a slightly higher number. See Benjamin Mako Hill and Aaron Shaw, "The
    Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with
    Propensity Score Estimation," *PLOS ONE* 8 (July 26, 2013), online. The
    problem is well known, and the Wikimedia Foundation has been making
    efforts to correct matters. In 2011, its goal was to increase the
    participation of women to 25 percent by 2015. This has not been
    achieved.[]{#Page_191 type="pagebreak" title="191"}

    [69](#c2-note-0069a){#c2-note-0069}  Shyong (Tony) K. Lam et al., "WP:
    Clubhouse? An Exploration of Wikipedia's Gender Imbalance," *WikiSym*
    11 (2011), online.

    [70](#c2-note-0070a){#c2-note-0070}  David Singh Grewal, *Network Power:
    The Social Dynamics of Globalization* (New Haven, CT: Yale University
    Press, 2008).

    [71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

    [72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
    (Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

    [73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
    Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

    [74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
    and the Bazaar," *First Monday* 3 (1998), online.

    [75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
    Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
    Weidenfeld, 1962), pp. 79--88.

    [76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
    Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
    (Berlin: Suhrkamp, 2013).

    [77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
    of science and technology studies. See, for instance, Geoffrey C. Bowker
    and Susan Leigh Star, *Sorting Things Out: Classification and Its
    Consequences* (Cambridge, MA: MIT Press, 1999).

    [78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
    Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
    (Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), pp. 50--69.

    [79](#c2-note-0079a){#c2-note-0079}  Quoted from Doron Swade, "The
    'Unerring Certainty of Mechanical Agency': Machines and Table Making in
    the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
    History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
    Oxford University Press, 2003), pp. 145--76, at 150.

    [80](#c2-note-0080a){#c2-note-0080}  The mechanical construction
    suggested by Leibniz was not to be realized as a practically usable (and
    therefore patentable) calculating machine until 1820, by which point it
    was referred to as an "arithmometer."

    [81](#c2-note-0081a){#c2-note-0081}  Krämer, *Symbolische Maschinen*, 98
    \[--trans.\].

    [82](#c2-note-0082a){#c2-note-0082}  Charles Babbage, *On the Economy of
    Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
    have already mentioned what may, perhaps, appear paradoxical to some of
    our readers -- that the division of labour can be applied with equal
    success to mental operations, and that it ensures, by its adoption, the
    same economy of time."

    [83](#c2-note-0083a){#c2-note-0083}  This structure, which is known as
    "Von Neumann architecture," continues to form the basis of almost all
    computers.

    [84](#c2-note-0084a){#c2-note-0084}  "Gordon Moore Says Aloha to
    Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
    type="pagebreak" title="192"}

    [85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
    an eine Zukunft ohne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
    could also say that this anxiety has been caused by the fact that the
    automation of labor has begun to affect middle-class jobs as well.

    [86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
    Write a Better News Story than a Human Reporter?" *Wired* (April 24,
    2012), online.

    [87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
    Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
    (New York: New Vessel Press, 2016).

    [88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
    are not unique in this regard. *Spiegel* has reported that, in Russia,
    entire "bot armies" have been mobilized for the "propaganda battle."
    Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
    *Spiegel Online* (February 28, 2015), online.

    [89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
    Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
    Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
    online.

    [90](#c2-note-0090a){#c2-note-0090}  Thomas Bunnell, "The Mathematics of
    Film," *Boom Magazine* (November 2007): 48--51.

    [91](#c2-note-0091a){#c2-note-0091}  Christopher Steiner, "Automatons
    Get Creative," *Wall Street Journal* (August 17, 2012), online.

    [92](#c2-note-0092a){#c2-note-0092}  "The Hewlett Foundation: Automated
    Essay Scoring," [kaggle.com](http://kaggle.com) (February 10, 2012),
    online.

    [93](#c2-note-0093a){#c2-note-0093}  Ian Ayres, *Super Crunchers: How
    Anything Can Be Predicted* (London: Bookpoint, 2007).

    [94](#c2-note-0094a){#c2-note-0094}  Each of these models was tested on
    the basis of the 50 million most common search terms from the years
    2003--8 and classified according to the time and place of the search.
    The results were compared with data from the health authorities. See
    Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
    Engine Query Data," *Nature* 457 (2009): 1012--4.

    [95](#c2-note-0095a){#c2-note-0095}  In absolute terms, the rate of
    correct hits, at 15.8 percent, was still relatively low. With the same
    dataset, however, random guessing would only have an accuracy of 0.005
    percent. See Quoc V. Le et al., "Building High-Level Features Using
    Large-Scale Unsupervised Learning,"
    [research.google.com](http://research.google.com) (2012), online.

    [96](#c2-note-0096a){#c2-note-0096}  Neil Johnson et al., "Abrupt Rise
    of New Machine Ecology beyond Human Response Time," *Nature: Scientific
    Reports* 3 (2013), online. The authors counted 18,520 of these events
    between January 2006 and February 2011; that is, about 15 per day on
    average.

    [97](#c2-note-0097a){#c2-note-0097}  Gerald Nestler, "Mayhem in Mahwah:
    The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
    in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
    *Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
    2014), pp. 125--46.

    [98](#c2-note-0098a){#c2-note-0098}  Another facial recognition
    algorithm by Google provides a good impression of the rate of progress.
    As early as 2011, the latter was able to identify dogs in images with 80
    percent accuracy. Three years later, this rate had not only increased to
    93.5 percent (which corresponds to human capabilities), but the
    algorithm could also identify more than 200 different types of dog,
    something that hardly any person can do. See Robert McMillan, "This Guy
    Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
    15, 2015), online.

    [99](#c2-note-0099a){#c2-note-0099}  Sergey Brin and Lawrence Page, "The
    Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
    Networks and ISDN Systems* 30 (1998): 107--17.

    [100](#c2-note-0100a){#c2-note-0100}  Eugene Garfield, "Citation Indexes
    for Science: A New Dimension in Documentation through Association of
    Ideas," *Science* 122 (1955): 108--11.

    [101](#c2-note-0101a){#c2-note-0101}  Since 1964, the data necessary for
    this has been published as the Science Citation Index (SCI).

    [102](#c2-note-0102a){#c2-note-0102}  The assumption that the subjects
    produce these structures indirectly and without any strategic intention
    has proven to be problematic in both contexts. In the world of science,
    there are so-called citation cartels -- groups of scientists who
    frequently refer to one another\'s work in order to improve their
    respective position in the SCI. Search engines have likewise given rise
    to search engine optimizers, which attempt by various means to optimize
    a website\'s evaluation by search engines.

    [103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
    and its influence on the early version of Google\'s PageRank, see Katja
    Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
    der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
    Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
    2009), pp. 64--83.

    [104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
    not be registered by the algorithm at all, for the search engine indexed
    the web by having its "crawler" follow the links itself.

    [105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
    [moz.com](http://moz.com) (2016), online.

    [106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
    Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
    of Personalisation," *First Monday* 17 (2011), online.

    [107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
    Factors," *Search Engine Journal* (May 31, 2013), online.

    [108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
    advertising that motivates the collection of personal information. Such
    information is also needed for the development of personalized
    algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
    the flood of data. It can therefore be assumed that the rampant
    collection of personal information will not cease or slow down even if
    commercial demands happen to change, for instance to a business model
    that is not based on advertising.

    [109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
    these three levels are recorded, see Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

    [110](#c2-note-0110a){#c2-note-0110}  This raises the question of which
    drivers should be sent on a detour, so that no traffic jam comes about,
    and which should be shown the most direct route, now free of traffic.

    [111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
    Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
    online.

    [112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
    Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

    [113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
    unprocessed, and "cooked," in the sense of processed, derive from the
    anthropologist Claude Lévi-Strauss, who introduced them to clarify the
    difference between nature and culture. See Claude Lévi-Strauss, *The Raw
    and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
    IL: University of Chicago Press, 1983).

    [114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
    Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
    2013), online.

    [115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
    cited quite often is already obsolete: Michael K. Bergman, "White Paper
    -- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
    Publishing* 7 (2001), online. The more content is dynamically generated
    by databases, the more questionable such estimates become. It is
    uncontested, however, that only a small portion of online information is
    registered by search engines.

    [116](#c2-note-0116a){#c2-note-0116}  Theo Röhle, "Die Demontage der
    Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    133--48.

    [117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
    world to be recorded by algorithms is not restricted to digital
    networks. As early as 1994 in Germany, for instance, a new sort of
    typeface was introduced (the *Fälschungserschwerende Schrift*,
    "forgery-impeding typeface") on license plates for the sake of machine
    readability and facilitating automatic traffic control. To the human
    eye, however, it appears somewhat misshapen and
    disproportionate.[]{#Page_195 type="pagebreak" title="195"}

    [118](#c2-note-0118a){#c2-note-0118}  [Fairsearch.org](http://Fairsearch.org)
    was officially supported by several of Google\'s competitors, including
    Microsoft, TripAdvisor, and Oracle.

    [119](#c2-note-0119a){#c2-note-0119}  "Antitrust: Commission Sends
    Statement of Objections to Google on Comparison Shopping Service,"
    *European Commission: Press Release Database* (April 15, 2015), online.

    [120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
    Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
    the middle of 2014, according to some sources, Google had received
    around 20 million requests to remove links from its index on account of
    copyright violations.

    [121](#c2-note-0121a){#c2-note-0121}  Alexander Wragge, "Google-Ranking:
    Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
    online.

    [122](#c2-note-0122a){#c2-note-0122}  Farhad Manjoo, "Amazon\'s Tactics
    Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
    (May 23, 2014), online.

    [123](#c2-note-0123a){#c2-note-0123}  Lucas D. Introna and Helen
    Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
    Matters," *Information Society* 16 (2000): 169--85, at 181.

    [124](#c2-note-0124a){#c2-note-0124}  Eli Pariser, *The Filter Bubble:
    How the New Personalized Web Is Changing What We Read and How We Think*
    (New York: Penguin, 2012).

    [125](#c2-note-0125a){#c2-note-0125}  Antoinette Rouvroy, "The End(s) of
    Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
    Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
    Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
    York: Routledge, 2013), pp. 143--65.

    [126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
    Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
    to predict and control the behavior of the individual organism. This is
    our 'dependent variable' -- the effect for which we are to find the
    cause. Our 'independent variables' -- the causes of behavior -- are the
    external conditions of which behavior is a function."

    [127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
    Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
    9, 2014), online.

    [128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
    "Critical Questions for Big Data: Provocations for a Cultural,
    Technological and Scholarly Phenomenon," *Information, Communication &
    Society* 15 (2012): 662--79.
    :::
    :::

    [III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}

    ::: {.section}
    Referentiality, communality, and algorithmicity have become the
    characteristic forms of the digital condition because more and more
    people -- in more and more segments of life and by means of increasingly
    complex technologies -- are actively (or compulsorily) participating in
    the negotiation of social meaning. They are thus reacting to the demands
    of a chaotic, overwhelming sphere of information and thereby
    contributing to its greater expansion. It is the ubiquity of these forms
    that makes it possible to speak of the digital condition in the
    singular. The goals pursued in these cultural forms, however, are as
    diverse, contradictory, and conflicted as society itself. It would
    therefore be equally false to assume uniformity or an absence of
    alternatives in the unfolding of social and political developments. On
    the contrary, the idea of a lack of alternatives is an ideological
    assertion that is itself part of a specific political agenda.

    In order to resolve this ostensible contradiction between developments
    that take place in a manner that is uniform and beyond influence and
    those that are characterized by the variable and open-ended
    implementation of diverse interests, it is necessary to differentiate
    between two levels. One possibility for doing so is presented by Marxist
    political economy. It distinguishes between *productive forces*, which
    are defined as the technical infrastructure, the state of knowledge, and
    the []{#Page_125 type="pagebreak" title="125"}organization of labor, and
    the *relations of production*, which are defined as the institutions,
    laws, and practices in which people are able to realize the
    techno-cultural possibilities of their time. Both are related to one
    another, though each develops with a certain degree of autonomy. The
    relation between them is essential for the development of society. The
    closer they correspond to one another, the more smoothly this
    development will run its course; the more contradictions happen to exist
    between them, the more this course will suffer from unrest and
    conflicts. One of many examples of a current contradiction between these
    two levels is the development that has occurred in the area of cultural
    works. Whereas radical changes have taken place in their production,
    processing, and reproduction (that is, on the level of productive
    forces), copyright law (that is, the level of the relations of
    production) has remained almost unchanged. In Marxist theory, such
    contradictions are interpreted as a starting point for political
    upheavals, indeed as a precondition for revolution. As Marx wrote:

    ::: {.extract}
    At a certain stage of development, the material productive forces of
    society come into conflict with the existing relations of production or
    -- this merely expresses the same thing in legal terms -- with the
    property relations within the framework of which they have operated
    hitherto. From forms of development of the productive forces these
    relations turn into their fetters. Then begins an era of social
    revolution.[^1^](#c3-note-0001){#c3-note-0001a}
    :::

    Many theories aiming to overcome capitalism proceed on the basis of this
    dynamic.[^2^](#c3-note-0002){#c3-note-0002a} The distinction between
    productive forces and the relations of production, however, is not
    unproblematic. On the one hand, no one has managed to formulate an
    entirely convincing theory concerning the reciprocal relation between
    the two. What does it mean, exactly, that they are related to one
    another and yet are simultaneously autonomous? When does the moment
    arrive in which they come into conflict with one another? And what,
    exactly, happens then? For the most part, these are unsolved questions.
    On the other hand, because of the blending of work and leisure already
    mentioned, as well as the general economization of social activity (as
    is happening on social []{#Page_126 type="pagebreak" title="126"}mass
    media and in the creative economy, for instance), it is hardly possible
    now to draw a line between production and reproduction. Thus, this set
    of concepts, which is strictly oriented toward economic production
    alone, is more problematic than ever. My decision to use these concepts
    is therefore limited to clarifying the conceptual transition from the
    previous chapter to the chapter at hand. The concern of the last chapter
    was to explain the forms that cultural processes have adopted under the
    present conditions -- ubiquitous telecommunication, general expressivity
    (referentiality), flexible cooperation (communality), and informational
    automation (algorithmicity). In what follows, on the contrary, my focus
    will turn to the political dynamics that have emerged from the
    realization of "productive forces" as concrete "relations of production"
    or, in more general terms, as social relations. Without claiming to be
    comprehensive, I have assigned the confusing and conflicting
    multiplicity of actors, projects, and institutions to two large
    political developments: post-democracy and commons. The former is moving
    toward an essentially authoritarian society, while the latter is moving
    toward a radical renewal of democracy by broadening the scope of
    collective decision-making. Both cases involve more than just a few
    minor changes to the existing order. Rather, both are ultimately leading
    to a new political constellation beyond liberal representative
    democracy.
    :::

    ::: {.section}
    Post-democracy {#c3-sec-0002}
    --------------

    The current dominant political development is the spread and
    entrenchment of post-democracy. The term was coined in the middle of the
    1990s by Jacques Rancière. "Post-democracy," as he defined it, "is the
    government practice and conceptual legitimization of a democracy *after*
    the demos, a democracy that has eliminated the appearance, miscount and
    dispute of the people."[^3^](#c3-note-0003){#c3-note-0003a} Rancière
    argued that the immediate presence of the people (the demos) has been
    abolished and replaced by processes of simulation and modeling such as
    opinion polls, focus groups, and plans for various scenarios -- all
    guided by technocrats. Thus, he believed that the character of political
    processes has changed, namely from disputes about how we []{#Page_127
    type="pagebreak" title="127"}ought to face a principally open future to
    the administration of predefined necessities and fixed constellations.
    As early as the 1980s, Margaret Thatcher justified her radical reforms
    with the expression "There is no alternative!" Today, this form of
    argumentation remains part of the core vocabulary of post-democratic
    politics. Even Angela Merkel is happy to call her political program
    *alternativlos* ("without alternatives"). According to Rancière, this
    attitude is representative of a government practice that operates
    without the unpredictable presence of the people and their dissent
    concerning fundamental questions. All that remains is "police logic," in
    which everything is already determined, counted, and managed.

    Ten years after Rancière\'s ruminations, Colin Crouch revisited the
    concept and defined it anew. His notion of post-democracy is as follows:

    ::: {.extract}
    Under this model, while elections certainly exist and can change
    governments, public electoral debate is a tightly controlled spectacle,
    managed by rival teams of professionals expert in the technique of
    persuasion, and considering a small range of issues selected by those
    teams. The mass of citizens plays a passive, quiescent, even apathetic
    part, responding only to the signals given them. Behind this spectacle
    of the electoral game, politics is really shaped in private by
    interaction between elected governments and elites that overwhelmingly
    represent business interests.[^4^](#c3-note-0004){#c3-note-0004a}
    :::

    He goes on:

    ::: {.extract}
    My central contentions are that, while the forms of democracy remain
    fully in place and today in some respects are actually strengthened --
    politics and government are increasingly slipping back into the control
    of privileged elites in the manner characteristic of predemocratic
    times; and that one major consequence of this process is the growing
    impotence of egalitarian causes.[^5^](#c3-note-0005){#c3-note-0005a}
    :::

    In his analysis, Crouch focused on the Western political system in the
    strict sense -- parties, parliaments, governments, eligible voters --
    and in particular on the British system under Tony Blair. He described
    the development of representative democracy as a rising and declining
    curve, and he diagnosed []{#Page_128 type="pagebreak" title="128"}not
    only an erosion of democratic institutions but also a shift in the
    legitimation of public activity. In this regard, according to Crouch,
    the participation of citizens in political decision-making (input
    legitimation) has become far less important than the quality of the
    achievements that are produced for the citizens (output legitimation).
    Out of democracy -- the "dispute of the people," in Rancière\'s sense --
    emerges governance. As Crouch maintains, however, this shift was
    accompanied by a sustained weakening of public institutions, because it
    was simultaneously postulated that private actors are fundamentally more
    efficient than the state. This argument was used (and continues to be
    used) to justify taking an increasing number of services away from
    public actors and entrusting them instead to the private sphere, which
    has accordingly become more influential and powerful. One consequence of
    this has been, according to Crouch, "the collapse of self-confidence on
    the part of the state and the meaning of public authority and public
    service."[^6^](#c3-note-0006){#c3-note-0006a} Ultimately, the threat at
    hand is the abolishment of democratic institutions in the name of
    efficiency. These institutions are then replaced by technocratic
    governments without a democratic mandate, as has already happened in
    Greece, Portugal, or Ireland, where external overseers have been
    directly or indirectly determining the political situation.

    ::: {.section}
    ### Social mass media as an everyday aspect of post-democratic life {#c3-sec-0003}

    For my purposes, it is of little interest whether the concept of "public
    authority" really ought to be revived or whether and in what
    circumstances the parable of rising and declining will help us to
    understand the development of liberal
    democracy.[^7^](#c3-note-0007){#c3-note-0007a} Rather, it is necessary
    to supplement Crouch\'s approach in order to make it fruitful for our
    understanding of the digital condition, which extends greatly beyond
    democratic processes in the classical sense -- that is, the making of
    far-reaching decisions about issues concerning society in a formalized
    and binding manner legitimized by citizen participation. I will
    therefore designate as "post-democratic" all of those developments --
    wherever they are taking place -- that, although admittedly preserving
    or even providing new []{#Page_129 type="pagebreak"
    title="129"}possibilities for participation, simultaneously also
    strengthen the capacity for decision-making on levels that preclude
    co-determination. This has brought about a lasting separation between
    social participation and the institutional exertion of power. These
    developments, the everyday instances of which may often be harmless and
    banal, create as a whole the cultural preconditions and experiences that
    make post-democracy -- both in Crouch\'s strict sense and the broader
    sense of Rancière -- seem normal and acceptable.

    In an almost ideal-typical form, the developments in question can be
    traced alongside the rise of commercially driven social mass media.
    Their shape, however, is not a matter of destiny (it is not the result
    of any technological imperative) but rather the consequence of a
    specific political, economic, and technical constellation that realized
    the possibilities of the present (productive forces) in particular
    institutional forms (relations of production) and was driven to do so in
    the interest of maximizing profit and control. A brief look at the
    history of digital communication will be enough to clarify this. In the
    middle of the 1990s, the architecture of the internet was largely
    decentralized and based on open protocols. The attempts of America
    Online (AOL) and CompuServe to run closed networks (intranets, as we
    would call them today) in competition with the open internet were
    unsuccessful. The large providers never really managed to address the
    need or desire of users to become active producers of meaning. Even the
    most popular elements of these closed worlds -- the forums in which
    users could interact relatively directly with one another -- lacked the
    diversity and multiplicity of participatory options that made the open
    internet so attractive.

    One of the most popular and radical services on the open internet was
    email. The special thing about it was that electronic messages could be
    used both for private (one-to-one) and for communal (many-to-many)
    communication of all sorts, and thus it helped to merge the previously
    distinct domains of the private and the communal. By the middle of the
    1980s, and with the help of specialized software, it was possible to
    create email lists with which one could send messages efficiently and
    reliably to small and large groups. Users could join these groups
    without much effort. From the beginning, email has played a significant
    role in the creation []{#Page_130 type="pagebreak" title="130"}of
    communal formations. Email was one of the first technologies that
    enabled the horizontal coordination of large and dispersed groups, and
    it was often used to that end. Linus Torvalds\'s famous call for people
    to collaborate with him on his operating system -- which was then "just
    a hobby" but today, as Linux, makes up part of the infrastructure of the
    internet -- was issued on August 25, 1991, via email (and newsgroups).

    One of the most important features of email derives from the service
    being integrated into an infrastructure that is decentralized by means
    of open protocols. And so it has remained. The fundamental Simple Mail
    Transfer Protocol (SMTP), which is still being used, is based on a
    so-called Request for Comments (RFC) from 1982. In this document, which
    sketched out the new protocol and made it open to discussion, it was
    established from the outset that communication should be enabled between
    independent networks.[^8^](#c3-note-0008){#c3-note-0008a} On the basis
    of this standard, it is thus possible today for different providers to
    create an integrated space for communication. Even though they are in
    competition with one another, they nevertheless cooperate on the level
    of the technical protocol and allow users to send information back and
    forth regardless of which providers are used. Switching providers does
    not mean forfeiting one\'s address book or any other data. Those who
    put convenience first can use one of the large
    commercial providers, or they can choose one of the many small
    commercial or non-commercial services that specialize in certain niches.
    It is even possible to set up one\'s own server in order to control this
    piece of infrastructure independently. In short, thanks to the
    competition between providers or because they themselves command the
    necessary technical know-how, users continue to have the opportunity to
    influence the infrastructure directly and thus to co-determine the
    essential (technical) parameters that allow for specific courses of
    action. Admittedly, modern email services are set up in such a way that
    most of their users remain on the surface, while the essential decisions
    about how they are able to act are made on the "back side"; that is, in
    the program code, in databases, and in configuration files. Yet these
    two levels are not structurally (that is, organizationally and
    technically) separated from one another. Whoever is willing and ready to
    []{#Page_131 type="pagebreak" title="131"}appropriate the corresponding
    and freely available technical knowledge can shift back and forth
    between them. Before the internet was made suitable for the masses, it
    had been necessary to possess such knowledge in order to use the often
    complicated and error-prone infrastructure at all.
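
    The openness described here is tangible at the level of code: any
    standards-compliant client can hand a message to any SMTP server,
    including a self-hosted one. As a minimal sketch -- server name, port,
    addresses, and credentials below are placeholders rather than any real
    provider\'s details -- Python\'s standard library suffices:

    ```python
    # Minimal sketch of email's openness: any client can submit a message
    # to any standards-compliant SMTP server. Host, port, addresses, and
    # credentials are placeholders, not a real provider's details.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"  # a user on a different, competing provider
    msg["Subject"] = "Interoperability"
    msg.set_content("Delivered across independent networks via SMTP.")

    with smtplib.SMTP("mail.example.org", 587) as server:
        server.starttls()                # encrypt the session
        server.login("alice", "secret")  # placeholder credentials
        server.send_message(msg)         # relayed onward between providers
    ```

    Because the protocol, not the provider, defines the exchange, the same
    sketch works unchanged against a large commercial service, a small
    niche provider, or one\'s own server.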

    Over the last 10 to 15 years, these structures have been radically
    changed by commercially driven social mass media, which have been
    dominated by investors. They began to offer a variety of services in a
    user-friendly form and thus enabled the great majority of the population
    to make use of complex applications on an everyday basis. This, however,
    has gone hand in hand with the centralization of applications and user
    information. In the case of email, this happened through the
    introduction of Webmail, which stores all individual messages on the
    provider\'s computers, where they can be read and composed via web
    browsers.[^9^](#c3-note-0009){#c3-note-0009a} From that point on,
    providers have been able to follow everything that users write in their
    emails. Thanks to nearly comprehensive internet connectivity, Webmail is
    very widespread today, and the large providers -- above all Google,
    whose Gmail service had more than 500 million users in 2014 -- dominate
    the market. The gap has thus widened between user interfaces and the
    processes that take place behind them on servers and in data centers,
    and this has expanded what Crouch referred to as "the influence of the
    privileged elite." In this case, the elite are the engineers and
    managers employed by the large providers, and everyone else with access
    to the underbelly of the infrastructure, including the British
    Government Communications Headquarters (GCHQ) and the US National
    Security Agency (NSA), both of which employ programs such as MUSCULAR
    to record data transfers between the computer centers operated by large
    American providers.[^10^](#c3-note-0010){#c3-note-0010a}

    Nevertheless, email essentially remains an open application, for the
    SMTP protocol forces even the largest providers to cooperate. Small
    providers are able to collaborate with the latter and establish new
    services with them. And this creates options. Since Edward Snowden\'s
    revelations, most people are aware that all of their online activities
    are being monitored, and this has spurred new interest in secure email
    services. In the meantime, there has been a whole series of projects
    aimed at combining simple usability with complex []{#Page_132
    type="pagebreak" title="132"}encryption in order to strengthen the
    privacy of normal users. This same goal has led to a number of
    successful crowd-funding campaigns, which indicates that both the
    interest and the resources are available to accomplish
    it.[^11^](#c3-note-0011){#c3-note-0011a} For users, however, these
    offers are only attractive if they are able to switch providers without
    great effort. Moreover, such new competition has motivated established
    providers to modify their own
    infrastructure.[^12^](#c3-note-0012){#c3-note-0012a} In the case of
    email, the level on which new user options are created is still
    relatively closely linked to that on which generally binding decisions
    are made and implemented. In this sense, email is not a post-democratic
    technology.
    :::

    ::: {.section}
    ### Centralization and the power of networks {#c3-sec-0004}

    Things are entirely different in the case of new social mass media such
    as Facebook, Twitter, LinkedIn, WhatsApp, or most of the other
    commercial services that were developed after the year 2000. Almost all
    of them are based on standards that are closed and controlled by the
    network operators, and these standards prevent users from communicating
    beyond the boundaries defined by the providers. Through Facebook, it is
    only possible to be in touch with other users of the platform, and
    whoever leaves the platform will have to give up all of his or her
    Facebook friends.

    As with email, these services also rely on people producing their own
    content. By now, Facebook has more than a billion users, and each of
    them has produced at least a rudimentary personal profile and a few
    likes. Thanks to networking opportunities, which make up the most
    important service offered by all of these providers, communal formations
    can be created with ease. Every day, groups are formed that organize
    information, knowledge, and resources in order to establish self-defined
    practices (both online and offline). The immense amounts of data,
    information, and cultural references generated by this are pre-sorted by
    algorithms that operate in the background to ensure that users never
    lose their orientation.[^13^](#c3-note-0013){#c3-note-0013a} Viewed from
    the perspective of output legitimation -- that is, in terms of what
    opportunities these services provide and at what cost -- such offers are
    extremely attractive. Examined from the perspective of input
    legitimation -- that is, in terms []{#Page_133 type="pagebreak"
    title="133"}of how essential decisions are made -- things look rather
    different. By means of technical, organizational, and legal standards,
    Facebook and other operators of commercially driven social mass media
    have created structures in which the level of user interaction is
    completely separated from the level on which essential decisions are
    made that concern the community of users. Users have no way to influence
    the design or development of the conditions under which they (have to)
    act. At best, it remains possible to choose one aspect or another from a
    predetermined offer; that is, to use certain options or not. Take it or
    leave it. Users can neither determine which options and features are
    available nor exert any direct influence over the matter. In
    short, commercial social networks have institutionalized a power
    imbalance between those engaged with the user interface and those who
    operate the services behind the scenes. The ability of users to
    organize themselves and exert influence -- over the way their data are
    treated, for instance -- is severely limited.

    One (nominal) exception to this was Facebook itself. From 2009 to
    2012, the company allowed users to vote on any proposed change to its
    terms and conditions that attracted more than 7,000 comments. If 30
    percent of all registered members participated, then the
    result would be binding. In practice, however, this rule did not have
    any consequences, for the quorum was never achieved. This is no
    surprise, because Facebook did not make any effort to increase
    participation. In fact, the opposite was true. As the privacy activist
    Max Schrems has noted, without mincing words, "After grand promises of
    user participation, the ballot box was then hidden away for
    safekeeping."[^14^](#c3-note-0014){#c3-note-0014a} With reference to the
    apparent lack of interest on the part of its users, Facebook did away
    with the possibility to vote and replaced it with the option of
    directing questions to management.[^15^](#c3-note-0015){#c3-note-0015a}
    Since then, and even in the case of fundamental decisions that concern
    everyone involved, there has been no way for users to participate in the
    discussion. This new procedure, which was used to implement a
    comprehensive change in Facebook\'s privacy policy, was described by the
    company\'s founder Mark Zuckerberg as follows: "We decided that these
    would be the social norms now, and we just went for
    it."[^16^](#c3-note-0016){#c3-note-0016a} It is not exactly clear whom
    he meant by "we." What is clear, []{#Page_134 type="pagebreak"
    title="134"}however, is that the number of people involved with
    decision-making is minute in comparison with the number of people
    affected by the decisions to be made.

    It should come as no surprise that, with the introduction of every new
    feature, providers such as Facebook have further tilted the balance of
    power between users and operators. With every new version and with every
    new update, the possibilities of interaction are changed in such a way
    that, within closed networks, more data can be produced in a more
    uniform format. Thus, it becomes easier to make connections between
    them, which is their only real source of value. Facebook\'s compulsory
    "real-name" policy, for instance, which no longer permits users to
    register under a pseudonym, makes it easier for the company to create
    comprehensive user profiles. Another standard allows the companies to
    assemble, in the background, a uniform profile out of the activities of
    users on sites or applications that seem at first to have nothing to do
    with one another.[^17^](#c3-note-0017){#c3-note-0017a} Google, for
    instance, connects user data from its search function with information
    from YouTube and other online services, but also with data from Nest, a
    networked thermostat. Facebook connects data from its social network
    with those from WhatsApp, Instagram, and the virtual-reality service
    Oculus.[^18^](#c3-note-0018){#c3-note-0018a} This trend is far from
    over. Many services are offering more and more new functions for
    generating data, and entire new areas of recording data are being
    developed (think, for instance, of Google\'s self-driving car). Yet
    users have access to just a minuscule portion of the data that they
    themselves have generated and with which they are being described. This
    information is fully available to the programmers and analysts alone.
    All of this is done -- as the sanctimonious argument goes -- in the name
    of data protection.
    :::

    ::: {.section}
    ### Selling, predicting, modifying {#c3-sec-0005}

    Unequal access to information has resulted in an imbalance of power, for
    the evaluation of data opens up new possibilities for action. Such data
    can be used, first, to earn revenue from personalized advertisements;
    second, to predict user behavior with greater accuracy; and third, to
    adjust the parameters of interaction in such a way that preferred
    patterns of []{#Page_135 type="pagebreak" title="135"}behavior become
    more likely. Almost all commercially driven social mass media are
    financed by advertising. In 2014, Facebook, Google, and Twitter earned
    90 percent of their revenue through such means. It is thus important for
    these companies to learn as much as possible about their users in order
    to optimize access to them and sell this access to
    advertisers.[^19^](#c3-note-0019){#c3-note-0019a} Google and Facebook
    justify the price for advertising on their sites by claiming that they
    are able to direct the messages of advertisers precisely to those people
    who would be most susceptible to them.

    Detailed knowledge about users, moreover, also provides new
    possibilities for predicting human
    behavior.[^20^](#c3-note-0020){#c3-note-0020a} In 2014, Facebook made
    headlines by claiming that it could predict a future romantic
    relationship between two of its members, and even that it could do so
    about a hundred days before the new couple changed their profile status
    to "in a relationship." The basis of this sort of prognosis is the
    changing frequency with which two people exchange messages over the
    social network. In this regard, it does not matter whether these
    messages are private (that is, only for the two of them), semi-public
    (only for friends), or public (visible to
    everyone).[^21^](#c3-note-0021){#c3-note-0021a} Facebook and other
    social mass media are set up in such a way that those who control the
    servers are always able to see everything. All of this information,
    moreover, is formatted in such a way as to optimize its statistical
    analysis. As the amounts of data increase, even the smallest changes in
    frequencies and correlations begin to gain significance. In its study of
    romantic relationships, for instance, Facebook discovered that the
    number of online interactions reaches its peak 12 days before a
    relationship begins and hits its low point 85 days after the status
    update (probably because of an increasing number of offline
    interactions).[^22^](#c3-note-0022){#c3-note-0022a} The difference in
    the frequency of online interactions between the high point and the low
    point was just 0.14 updates per day. In other words, Facebook\'s
    statisticians could recognize and evaluate when users would post, over
    the course of seven days, one more message than they might usually
    exchange. With traditional methods of surveillance, which focus on
    individual people, such a small deviation would not have been detected.
    To do so, it is necessary to have immense numbers of users generating
    immense volumes of data. Accordingly, these new []{#Page_136
    type="pagebreak" title="136"}analytic possibilities do not mean that
    Facebook can accurately predict the behavior of a single user. The
    unique person remains difficult to calculate, for all that could be
    ascertained from this information would be a minimally different
    probability of future behavior. As regards a single person, this gain in
    knowledge would not be especially useful, for a slight change in
    probability has no predictive power on a case-by-case basis. If, in the
    case of a unique person, the probability of a particular future action
    climbs from, say, 30 to 31 percent, then not much is gained with respect
    to predicting this one person\'s behavior. If vast numbers of similar
    people are taken into account, however, then the power of prediction
    increases enormously. If, in the case of 1 million people, the
    probability of a future action increases by 1 percent, this means that,
    in the future, around 10,000 more people will act in a certain way.
    Although it may be impossible to say for sure which member of a "group"
    this might be, this is not relevant to the value of the prediction (to
    an advertising agency, for instance).
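
    The aggregate logic at work here can be stated as a few lines of
    arithmetic; the numbers below are the illustrative ones from this
    paragraph, not Facebook\'s actual figures:

    ```python
    # Illustrative arithmetic: a 1-percentage-point shift in probability
    # says almost nothing about one person but yields a reliable expected
    # count for a large group.
    delta_p = 0.01     # probability of the action rises from 30% to 31%
    group = 1_000_000  # number of broadly similar people observed

    print(1 * delta_p)             # 0.01 -> no predictive power for one person
    print(round(group * delta_p))  # 10000 -> ~10,000 additional people will act
    ```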

    It is also possible to influence large groups by changing the parameters
    of their informational environment. Many online news portals, for
    instance, simultaneously test multiple headlines during the first
    minutes after the publication of an article (that is, different groups
    are shown different titles for the same article). These so-called A/B
    tests are used to measure which headlines attract the most clicks. The
    most successful headline is then adopted and shown to larger
    groups.[^23^](#c3-note-0023){#c3-note-0023a} This, however, is just the
    beginning. All services are constantly changing their features for
    select focus groups without any notification, and this is happening both
    on the level of the user interface and on that of their hidden
    infrastructure. In this way, reactions can be tested in order to
    determine whether a given change should be implemented more broadly or
    rejected. If these experiments and interventions are undertaken with
    commercial intentions -- to improve the placement of advertisements, for
    instance -- then they hardly trigger any special reactions. Users will
    grumble when their customary procedures are changed, but this is
    usually a matter of short-term irritation, for users know that they can
    hardly do anything about it beyond expressing their discontent. A
    greater stir was caused by an experiment conducted in the middle of
    2014, []{#Page_137 type="pagebreak" title="137"}for which Facebook
    manipulated the timelines of 689,003 of its users, approximately 0.04
    percent of all members. The selected members were divided into two
    groups, one of which received more "positive" messages from their circle
    of friends while the other received more "negative" messages. For a
    control group, the filter settings were left unchanged. The goal was to
    investigate whether, without any direct interaction or non-verbal cues
    (mimicry, for example), the mood of a user could be influenced by the
    mood that he or she perceives in others -- that is, whether so-called
    "emotional contagion," which had hitherto only been demonstrated in the
    case of small and physically present groups, also took place online. The
    answer, according to the results of the study, was a resounding
    "yes."[^24^](#c3-note-0024){#c3-note-0024a} Another conclusion, though
    one that the researchers left unexpressed, is that Facebook can
    influence this process in a controlled manner. Here, it is of little
    interest whether it is genuinely possible to manipulate the emotional
    condition of someone posting on Facebook by increasing the presence of
    certain key words, or whether the presence of these words simply
    increases the social pressure for someone to appear in a better or worse
    mood.[^25^](#c3-note-0025){#c3-note-0025a} What is striking is rather
    the complete disregard of one of the basic ethical principles of
    scientific research, namely that human subjects must be informed about
    and agree to any experiments performed on or with them ("informed
    consent"). This disregard was not a mere oversight; the authors of the
    study were alerted to the issue before publication, and the methods were
    subjected to an internal review. The result: Facebook\'s terms of use
    allow such methods, no legal claims could be made, and the modulation of
    the newsfeed by changing filter settings is so common that no one at
    Facebook could see anything especially wrong with the
    experiment.[^26^](#c3-note-0026){#c3-note-0026a}
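
    The A/B testing of headlines mentioned above follows a simple pattern:
    randomly assign early visitors to a variant, tally clicks, and roll out
    the better-performing headline. The following sketch shows the
    mechanics; the headlines and click rates are invented for illustration:

    ```python
    import random

    # Illustrative A/B test: each early visitor is randomly shown one of
    # two headlines; the variant with the higher click rate is kept.
    headlines = {"A": "Crisis talks continue", "B": "Talks near collapse"}
    true_ctr = {"A": 0.04, "B": 0.06}  # unknown to the portal; simulated here
    shown = {"A": 0, "B": 0}
    clicked = {"A": 0, "B": 0}

    for _ in range(10_000):            # the first minutes after publication
        v = random.choice(["A", "B"])  # random assignment to a test group
        shown[v] += 1
        if random.random() < true_ctr[v]:
            clicked[v] += 1

    rates = {v: clicked[v] / shown[v] for v in headlines}
    winner = max(rates, key=rates.get)  # shown to all remaining readers
    print(winner, rates)
    ```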

    Why would they? All commercially driven social mass media conduct
    manipulative experiments. From the perspective of "data behaviorism,"
    this is the best way to acquire feedback from users -- far better than
    direct surveys.[^27^](#c3-note-0027){#c3-note-0027a} Facebook had also
    already conducted experiments in order to intervene directly in
    political processes. On November 2, 2010, the social mass medium tested,
    by manipulating timelines, whether it might be possible to increase
    voter turnout for the American midterm elections that were taking place
    []{#Page_138 type="pagebreak" title="138"}on that day. An application
    containing polling information and a list of friends who had already
    voted was surreptitiously loaded into the timelines of more than 10
    million people. It was possible to collect this data because the
    application had a built-in function that enabled people to indicate
    whether they had already cast a vote. A control group received a message
    that encouraged them to vote but lacked any personalization or the
    possibility of social interaction. This experiment, too, relied on the
    principle of "contagion." By the end of the day, those who saw that
    their friends had already voted were 0.39 percent more likely to go to
    the polls than those in the control group. In relation to a single
    person, the extent of this influence was thus extremely weak and barely
    relevant. Indeed, it would be laughable even to speak of influence at
    all if only 250 people had altered their behavior. Personal experience
    suggests that one cannot be manipulated by such things. It would be
    false to conclude, however, that such interventions are irrelevant, for
    matters are entirely different where large groups are concerned. On
    account of Facebook\'s small experiment, approximately 60,000 people
    voted who otherwise would have stayed at home, and around 340,000 extra
    votes were cast (because most people do not go to vote alone but rather
    bring along friends and family members, who vote at the same
    time).[^28^](#c3-note-0028){#c3-note-0028a} These are relevant numbers
    if the margins are narrow between the competing parties or candidates,
    especially if the people who receive the extra information and incentive
    are not -- as they were for this study -- chosen at
    random.[^29^](#c3-note-0029){#c3-note-0029a} Facebook already possesses,
    in excess, the knowledge necessary to focus on a particular target
    group, for instance on people whose sympathies lie with one party or
    another.[^30^](#c3-note-0030){#c3-note-0030a}
    :::

    ::: {.section}
    ### The dark shadow of cybernetics {#c3-sec-0006}

    Far from being unusual, the manipulation of information behind the backs
    of users is rather something that is done every day by commercially
    driven social mass media, which are not primarily channels for
    transmitting content but rather -- and above all -- environments in
    which we live. Both of the examples discussed above illustrate what is
    possible when these environments, which do not represent the world but
    []{#Page_139 type="pagebreak" title="139"}rather generate it, are
    centrally controlled, as is presently the case. Power is being exercised
    not by directly stipulating what each individual ought to do, but rather
    by altering the environment in which everyone is responsible for finding
    his or her way. The baseline of facts can be slightly skewed in order to
    increase the probability that this modified facticity will, as a sort
    of social gravity, guide things in a certain direction. At work here is
    the fundamental insight of cybernetics, namely that the "target" to be
    met -- be it an enemy bomber,[^31^](#c3-note-0031){#c3-note-0031a} a
    citizen, or a customer -- orients its behavior to its environment, to
    which it is linked via feedback. From this observation, cybernetically
    oriented social planners soon drew the conclusion that the best (because
    indirect and hardly perceptible) method for influencing the "target"
    would be to alter its environment. As early as the beginning of the
    1940s, the anthropologist and cyberneticist Gregory Bateson posed the
    following question: "How would we rig the maze or problem-box so that
    the anthropomorphic rat shall obtain a repeated and reinforced
    impression of his own free will?"[^32^](#c3-note-0032){#c3-note-0032a}
    Though Bateson\'s formulation is somewhat flippant, there was a serious
    backdrop to this problem. The electoral success of the Nazis during the
    1930s seemed to indicate that the free expression of will can have
    catastrophic political consequences. In response to this, the American
    planners of the post-war order made it their objective to steer the
    population toward (or keep it on) the path of liberal, market-oriented
    democracy without obviously undermining the legitimacy of liberal
    democracy itself, namely its basis in the individual\'s free will and
    freedom of choice. According to the French author collective Tiqqun,
    this paradox was resolved by the introduction of "a new fable that,
    after the Second World War, definitively \[...\] supplanted the liberal
    hypothesis. Contrary to the latter, it proposes to conceive biological,
    physical and social behaviors as something integrally programmed and
    re-programmable."[^33^](#c3-note-0033){#c3-note-0033a} By the term
    "liberal hypothesis," Tiqqun meant the assumption, stemming from the
    time of the Enlightenment, that people could improve themselves by
    applying their own reason and exercising their own moral faculties, and
    could free themselves from ignorance through education and reflection.
    Thus, they could become autonomous individuals and operate as free
    actors (both as market []{#Page_140 type="pagebreak"
    title="140"}participants and as citizens). The liberal hypothesis is
    based on human understanding. The cybernetic hypothesis is not. Its
    conception of humans is analogous to its conception of animals, plants,
    and machines; like the latter, people are organisms that react to
    stimuli from their environment. The hypothesis is thus associated with
    the theories of "instrumental conditioning," which had been formulated
    by behaviorists during the 1940s. In the case of both humans and other
    animals, as it was argued, learning is not a process of understanding
    but rather one of executing a pattern of stimulus and response. To learn
    is thus to adopt a pattern of behavior with which one\'s own activity
    elicits the desired reaction. In this model, understanding does not play
    any role; all that matters is
    behavior.[^34^](#c3-note-0034){#c3-note-0034a}

    And this behavior, according to the cybernetic hypothesis, can be
    programmed not by directly accessing people (who are conceived as
    impenetrable black boxes) but rather by indirectly altering the
    environment, with which organisms and machines are linked via feedback.
    These interventions are usually so subtle as to not be perceived by the
    individual, and this is because there is no baseline against which it is
    possible to measure the extent to which the "baseline of facts" has been
    tilted. Search results and timelines are always being filtered and,
    owing to personalization, a search will hardly ever generate the same
    results twice. On a case-by-case basis, the effects of this are often
    minimal for the individual. In aggregate and over long periods of time,
    however, the effects can be substantial without the individual even
    being able to detect them. Yet the practice of controlling behavior by
    manipulating the environment is not limited to the environment of
    information. In their enormously influential book from 2008, *Nudge*,
    Richard Thaler and Cass Sunstein even recommended this as a general
    method for "nudging" people, almost without their notice, in the
    direction desired by central planners. To accomplish this, it is
    necessary for the environment to be redesigned by the "choice architect"
    -- by someone, for instance, who can organize the groceries in a store
    in such a way as to increase the probability that shoppers will reach
    for healthier options. They refer to this system of control as
    "libertarian paternalism" because it combines freedom of choice
    (libertarianism) with obedience []{#Page_141 type="pagebreak"
    title="141"}to an -- albeit invisible -- authority figure
    (paternalism).[^35^](#c3-note-0035){#c3-note-0035a} The ideal sought by
    the authors is a sort of unintrusive caretaking. In the spirit of
    cybernetics and in line with the structures of post-democracy, the
    expectation is for people to be moved in the experts\' chosen direction
    by means of a change to their environment, while simultaneously
    maintaining the impression that they are behaving in a free and
    autonomous manner. The compatibility of this approach with agendas on
    both sides of the political spectrum is evident in the fact that the
    Democratic president Barack Obama regularly sought Cass Sunstein\'s
    advice and, in 2009, made him the director of the Office of Information
    and Regulatory Affairs, while Richard Thaler, in 2010, was appointed to
    the advisory board of the so-called Behavioural Insights Team, which,
    known as the "nudge unit," had been founded by the Conservative prime
    minister David Cameron.

    In the case of social mass media, the ability to manipulate the
    environment is highly one-sided. It is reserved exclusively for those on
    the inside, and the latter are concerned with maximizing the profit of a
    small group and expanding their power. It is possible to regard this
    group as the inner core of the post-democratic system, consisting of
    leading figures from business, politics, and the intelligence agencies.
    Users typically experience this power, which determines the sphere of
    possibility within which their everyday activity can take place, in its
    soft form, for instance when new features are introduced that change the
    information environment. The hard form of this power only becomes
    apparent in extreme cases, for instance when a profile is suddenly
    deleted or a group is removed. This can happen on account of a rule
    whose existence does not necessarily have to be public or
    transparent,[^36^](#c3-note-0036){#c3-note-0036a} or because of an
    external intervention that will only be communicated if it is in the
    providers\' interest to do so. Such cases make it clear that, at any
    time, service providers can take away the possibilities for action that
    they offer. This results in a paradoxical experience on the part of
    users: the very environments that open up new opportunities for them in
    their personal lives prove to be entirely beyond influence when it comes
    to fundamental decisions that affect everyone. And, as the majority of
    people gradually lose the ability to co-determine how the "big
    questions" are answered, a very []{#Page_142 type="pagebreak"
    title="142"}small number of actors is becoming stronger than ever. This
    paradox of new opportunities for action and simultaneous powerlessness
    has been reflected in public debate, where there has also been much
    (one-sided) talk about empowerment and the loss of
    control.[^37^](#c3-note-0037){#c3-note-0037a} It would be better to
    discuss a shift in power that has benefited the elite at the expense of
    the vast majority of people.
    :::

    ::: {.section}
    ### Networks as monopolies {#c3-sec-0007}

    Whereas the dominance of output legitimation is new in the realm of
    politics, it is normal and seldom regarded as problematic in the world
    of business.[^38^](#c3-note-0038){#c3-note-0038a} For, at least in
    theory (that is, under the conditions of a functioning market),
    customers are able to deny the legitimacy of providers and ultimately
    choose between competing products. In the case of social mass media,
    however, there is hardly any competition, despite all of the innovation
    that is allegedly taking place. Facebook, Twitter, and many other
    platforms use closed protocols that greatly hinder the ability of their
    members to communicate with the users of competing providers. This has
    led to a situation in which the so-called *network effect* -- the fact
    that the more a network connects people with one another, the more
    useful and attractive it becomes -- has given rise to a *monopoly
    effect*: the entire network can only consist of a single provider. This
    connection between the network effect and the monopoly effect, however,
    is not inevitable, but rather fabricated. It is the closed standards
    that make it impossible to switch providers without losing access to the
    entire network and thus also to the communal formations that were
    created on its foundation. From the perspective of the user, this
    represents an extremely high barrier against leaving the network -- for,
    as discussed above, these formations now play an essential role in the
    creation of both identity and opportunities for action. From the user\'s
    standpoint, this is an all-or-nothing decision with severe consequences.
    Formally, this is still a matter of individual and free choice, for no
    one is being forced, in the classical sense, to use a particular
    provider.[^39^](#c3-note-0039){#c3-note-0039a} Yet the options for
    action are already pre-structured in such a way that free choice is no
    longer free. The majority of American teens, for example, despite
    []{#Page_143 type="pagebreak" title="143"}no longer being very
    enthusiastic about Facebook, continue using the network for fear of
    missing out on something.[^40^](#c3-note-0040){#c3-note-0040a} This
    contradiction -- voluntarily doing something that one does not really
    want to do -- and the resulting experience of failing to shape one\'s
    own activity in a coherent manner are ideal-typical manifestations of
    the power of networks.
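
    One common way to quantify the network effect invoked above -- the text
    does not use this formalization itself, so take it as an illustrative
    gloss -- is to count the possible connections among $n$ participants:

    $$C(n) = \frac{n(n-1)}{2}, \qquad
    \frac{C(2n)}{C(n)} = \frac{2(2n-1)}{n-1} \approx 4
    \quad \text{for large } n.$$

    The connective value of a closed network thus grows roughly
    quadratically with its membership, while a departing user keeps none of
    it; the cost of leaving rises with every new member, which is precisely
    what turns the network effect into a monopoly effect.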

    The problem experienced by the unwilling-willing users of Facebook has
    not been caused by the transformation of communication into data as
    such. This is necessary to provide input for algorithms, which turn the
    flood of information into something usable. To this extent, the general
    complaint about the domination of algorithms is off the mark. The
    problem is not the algorithms themselves but rather the specific
    capitalist and post-democratic setting in which they are implemented.
    They only become an instrument of domination when open and
    decentralized activities are transferred into closed and centralized
    structures in which far-reaching, fundamental decision-making powers and
    possibilities for action are embedded that legitimize themselves purely
    on the basis of their output. Or, to adapt the title of Rosa von
    Praunheim\'s film, which I discussed in my first chapter: it is not the
    algorithm that is perverse, but the situation in which it lives.
    :::

    ::: {.section}
    ### Political surveillance {#c3-sec-0008}

    In June 2013, Edward Snowden exposed an additional and especially
    problematic aspect of the expansion of post-democratic structures: the
    comprehensive surveillance of the internet by government intelligence
    agencies. The latter do not use collected data primarily for commercial
    ends (although they do engage in commercial espionage) but rather for
    political repression and the protection of central power interests --
    or, to put it in more neutral terms, in the service of general security.
    Yet the NSA and other intelligence agencies also record decentralized
    communication and transform it into (meta-)data, which are centrally
    stored and analyzed.[^41^](#c3-note-0041){#c3-note-0041a} This process
    is used to generate possible courses of action, from intensifying the
    surveillance of individuals and manipulating their informational
    environment[^42^](#c3-note-0042){#c3-note-0042a} to launching military
    drones for the purpose of
    assassination.[^43^](#c3-note-0043){#c3-note-0043a} The []{#Page_144
    type="pagebreak" title="144"}great advantage of meta-data is that they
    can be standardized and thus easily evaluated by machines. This is
    especially important for intelligence agencies because, unlike social
    mass media, they do not analyze uniformly formatted and easily
    processable streams of communication. That said, the boundaries between
    post-democratic social mass media and government intelligence services
    are fluid. As is well known by now, the two realms share a number of
    continuities in personnel and commonalities with respect to their
    content.[^44^](#c3-note-0044){#c3-note-0044a} In 2010, for instance,
    Facebook\'s chief security officer left his job for a new position at
    the NSA. Personnel swapping of this sort takes place at all levels and
    is facilitated by the fact that the two sectors are engaged in nearly
    the same activity: analyzing social interactions in real time by means
    of their exclusive access to immense volumes of data. The lines of
    inquiry and the applied methods are so similar that universities,
    companies, and security organizations are able to cooperate closely with
    one another. In many cases, certain programs or analytic methods are
    just as suitable for commercial purposes as they are for intelligence
    agencies and branches of the military. This is especially apparent in
    the research that is being conducted. Scientists, businesses, and
    militaries share a common interest in discovering collective social
    dynamics as early as possible, isolating the relevant nodes (machines,
    individual people, or groups) through which these dynamics can be
    influenced, and developing strategies for specific interventions to
    achieve one goal or another. Aspects of this cooperation are publicly
    documented. Since 2011, for instance, the Defense Advanced Research
    Projects Agency (DARPA) -- the American agency that, in the 1960s,
    initiated and financed the development of the internet -- has been
    running its own research program on social mass media with the name
    Social Media in Strategic Communication. Within the framework of this
    program, more than 160 scientific studies have already been published,
    with titles such as "Automated Leadership Analysis" or "Interplay
    between Social and Topical
    Structure."[^45^](#c3-note-0045){#c3-note-0045a} Since 2009, the US
    military has been coordinating research in this field through a program
    called the Minerva Initiative, which oversees more than 70 individual
    projects.[^46^](#c3-note-0046){#c3-note-0046a} Since 2009, too, the
    European Union has been working together []{#Page_145 type="pagebreak"
    title="145"}with universities and security agencies within the framework
    of the so-called INDECT program, the goal of which is "to involve
    European scientists and researchers in the development of solutions to
    and tools for automatic threat
    detection."[^47^](#c3-note-0047){#c3-note-0047a} Research, however, is
    just one area of activity. As regards the collection of data and the
    surveillance of communication, there is also a high degree of
    cooperation between private and government actors, though it is not
    always without tension. Snowden\'s revelations have done little to
    change this. The public outcry of large internet companies over the fact
    that the NSA has been monitoring their services might be an act of
    showmanship more than anything else. Such bickering, according to the
    security expert Bruce Schneier, is "mostly role-playing designed to keep
    us blasé about what\'s really going
    on."[^48^](#c3-note-0048){#c3-note-0048a}
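
    A toy example shows why standardized meta-data are so amenable to
    machine evaluation: the headers of any standards-compliant email can be
    parsed, and a rudimentary who-talks-to-whom graph built, without
    reading a single message body. The messages below are invented:

    ```python
    from collections import Counter
    from email import message_from_string

    # Toy sketch: standardized meta-data (here, email headers) yield a
    # contact graph without any access to message content.
    raw_messages = [
        "From: alice@example.org\nTo: bob@example.net\n\n(body ignored)",
        "From: alice@example.org\nTo: carol@example.com\n\n(body ignored)",
        "From: bob@example.net\nTo: alice@example.org\n\n(body ignored)",
    ]

    contacts = Counter()
    for raw in raw_messages:
        msg = message_from_string(raw)
        contacts[(msg["From"], msg["To"])] += 1  # sender-recipient pairs

    print(contacts.most_common())  # the most frequent lines of contact
    ```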

    Like the operators of social mass media, intelligence agencies also
    argue that their methods should be judged according to their output;
    that is, the extent to which they ensure state security. Outsiders,
    however, are hardly able to make such a judgment. Input legitimation --
    that is, the question of whether government security agencies are
    operating within the bounds of the democratically legitimized order of
    law -- seems to be playing a less significant role in the public
    discussion. In somewhat exaggerated terms, one could say that the
    disregard for fundamental rights is justified by the quality of the
    "security" that these agencies have created. Perhaps the similarity of
    the general methods and self-justifications with which service providers
    of social production, consumption, and security are constantly
    "optimized" is one reason why there has yet to be widespread public
    protest against comprehensive surveillance programs. We have been warned
    of the establishment of a "police state in reserve," which can be
    deployed at any time, but these warnings seem to have fallen on deaf
    ears.[^49^](#c3-note-0049){#c3-note-0049a}
    :::

    ::: {.section}
    ### The normalization of post-democracy {#c3-sec-0009}

    At best, it seems as though the reflex of many people is to respond to
    even fundamental political issues by considering only what might be
    useful or pleasant for themselves in the short term. Apparently, many
    people consider it normal to []{#Page_146 type="pagebreak"
    title="146"}be excluded from decisions that affect broad and significant
    areas of their life. The post-democracy of social mass media, which has
    deeply permeated the constitution of everyday life and the constitution
    of subjects, is underpinned by the ever advancing post-democracy of
    politics. It changes the expectations that citizens have for democratic
    institutions, and it makes their increasing erosion seem expected and
    normal to broad strata of society. The violation of fundamental and
    constitutional civil rights, such as those concerning the protection of
    data, is increasingly regarded as unavoidable and -- from the pragmatic
    perspective of the individual -- not so bad. This has of course
    benefited political decision-makers, who have shown little desire to
    change the situation, safeguard basic rights, and establish democratic
    control over all areas of executive
    authority.[^50^](#c3-note-0050){#c3-note-0050a}

    The spread of "smart" technologies is enabling such post-democratic
    processes and structures to permeate all areas of life. Within one\'s
    private living space, this happens through smart homes, which are still
    limited to the high end of the market, and smart meters, which have been
    implemented across all social
    strata.[^51^](#c3-note-0051){#c3-note-0051a} The latter provide
    electricity companies with detailed real-time data about a household\'s
    usage behavior and are supposed to enhance energy efficiency, but it
    remains unclear exactly how this new efficiency will be
    achieved.[^52^](#c3-note-0052){#c3-note-0052a} The concept of the "smart
    city" extends this process to entire municipalities. Over the course of
    the next few decades, for instance, Siemens predicts that "cities will
    have countless autonomous, intelligently functioning IT systems that
    will have perfect knowledge of users\' habits and energy consumption,
    and provide optimum service. \[...\] The goal of such a city is to
    optimally regulate and control resources by means of autonomous IT
    systems."[^53^](#c3-note-0053){#c3-note-0053a} According to this vision,
    the city will become a cybernetic machine, but if everything is
    "optimally" regulated and controlled, who will be left to ask in whose
    interests these autonomous systems are operating?

    Such dynamics, however, not only reorganize physical space on a small
    and a large scale; they also infiltrate human beings. Adherents of the
    Quantified Self movement work diligently to record digital information
    about their own bodies. The number of platforms that incite users to
    stay fit (and []{#Page_147 type="pagebreak" title="147"}share their data
    with companies) with competitions, point systems, and similar incentives
    has been growing steadily. It is just a small step from this hobby
    movement to a disciplinary regime that is targeted at the
    body.[^54^](#c3-note-0054){#c3-note-0054a} Imagine the possibilities of
    surveillance and sanctioning that will come about when data from
    self-optimizing applications are combined with the data available to
    insurance companies, hospitals, authorities, or employers. It does not
    take too much imagination to do so, because this is already happening in
    part today. At the end of 2014, for instance, the Generali Insurance
    Company announced a new set of services that is marketed under the name
    Vitality. People insured in Germany, France, and Austria are supposed to
    send their health information to the company and, as a reward for
    leading a "proper" lifestyle, receive a rebate on their premium. The
    long-term goal of the program is to develop "behavior-dependent tariff
    models," which would undermine the solidarity model of health
    insurance.[^55^](#c3-note-0055){#c3-note-0055a}

    According to the legal scholar Frank Pasquale, the sum of all these
    developments has led to a black-box society: more and more social processes are
    being controlled by algorithms whose operations are not transparent
    because they are shielded from the outside world and thus from
    democratic control.[^56^](#c3-note-0056){#c3-note-0056a} This
    ever-expanding "post-democracy" is not simply liberal democracy with a
    few problems that can be eliminated through well-intentioned reforms.
    Rather, a new social system has emerged in which allegedly relaxed
    control over social activity is compensated for by a heightened level of
    control over the data and structural conditions pertaining to the
    activity itself. In this system, both the virtual and the physical world
    are altered to achieve particular goals -- goals determined by just a
    few powerful actors -- without the inclusion of those affected by these
    changes and often without them being able to notice the changes at all.
    Whoever refuses to share his or her data freely comes to look suspicious
    and, regardless of the motivations behind this anonymity, might even be
    regarded as a potential enemy. In July 2014, for instance, the following
    remarks were included in Facebook\'s terms of use: "On Facebook people
    connect using their real names and identities. \[...\] Claiming to be
    another person \[...\] or creating multiple accounts undermines
    community []{#Page_148 type="pagebreak" title="148"}and violates
    Facebook\'s terms."[^57^](#c3-note-0057){#c3-note-0057a} For the police
    and the intelligence agencies in particular, all activities that attempt
    to evade comprehensive surveillance are generally suspicious. Even in
    Germany, people are labeled "extremists" by the NSA for the sole reason
    that they have supported the Tor Project\'s anonymity
    software.[^58^](#c3-note-0058){#c3-note-0058a} In a 2014 trial in
    Vienna, the use of a foreign pre-paid telephone was introduced as
    evidence that the defendant had attempted to conceal a crime, even
    though this is a harmless and common method for avoiding roaming charges
    while abroad.[^59^](#c3-note-0059){#c3-note-0059a} This is a sort of
    anti-mask law 2.0, and every additional terrorist attack is used to
    justify extending its reach.

    It is clear that Zygmunt Bauman\'s bleak assessment of freedom in what
    he calls "liquid modernity" -- "freedom comes when it no longer
    matters"[^60^](#c3-note-0060){#c3-note-0060a} -- can easily be modified
    to suit the digital condition: everyone can participate in cultural
    processes, because culture itself has become irrelevant. Disputes about
    shared meaning, in which negotiations are made about what is important
    to people and what ought to be achieved, have less and less influence
    over the way power is exercised. Politics has been abandoned for an
    administrative management that oscillates between paternalism and
    authoritarianism. Issues that concern the common good have been
    delegated to "autonomous IT systems" and removed from public debate. By
    now, the exercise of power, which shapes society, is based less on basic
    consensus and cultural hegemony than it is on the technocratic argument
    that "there is no alternative" and that the (informational) environment
    in which people have to orient themselves should be optimized through
    comprehensive control and manipulation -- whether they agree with this
    or not.
    :::

    ::: {.section}
    ### Forms of resistance {#c3-sec-0010}

    As far as the circumstances outlined above are concerned, Bauman\'s
    conclusion may seem justified. But as an overarching assessment of
    things, it falls somewhat short, for every form of power provokes its
    own forms of resistance.[^61^](#c3-note-0061){#c3-note-0061a} In the
    context of post-democracy under the digital condition, these forms have
    likewise shifted to the level of data, and an especially innovative and
    effective means of resistance []{#Page_149 type="pagebreak"
    title="149"}has been the "leak"; that is, the unauthorized publication
    of classified documents, usually in the form of large datasets. The most
    famous platform for this is WikiLeaks, which since 2006 has attracted
    international attention to this method with dozens of spectacular
    publications -- on corruption scandals, abuses of authority, corporate
    malfeasance, environmental damage, and war crimes. As a form of
    resistance, however, leaking entire databases is not limited to just one
    platform. In recent years and through a variety of channels, large
    amounts of data (from banks and accounting firms, for instance) have
    been made public or have been handed over to tax investigators by
    insiders. Thus, in 2014, for instance, the *Süddeutsche Zeitung*
    (operating as part of the International Consortium of Investigative
    Journalists based in Washington, DC) was not only able to analyze the
    so-called "Offshore Leaks" -- a database concerning approximately
    122,000 shell companies registered in tax
    havens[^62^](#c3-note-0062){#c3-note-0062a} -- but also the "Luxembourg
    Leaks," which consisted of 28,000 pages of documents demonstrating the
    existence of secret and extensive tax deals between national authorities
    and multinational corporations and which caused a great deal of
    difficulty for Jean-Claude Juncker, the newly elected president of the
    European Commission and former prime minister of
    Luxembourg.[^63^](#c3-note-0063){#c3-note-0063a}

    The reasons why employees or government workers have become increasingly
    willing to hand over large amounts of information to journalists or
    whistle-blowing platforms are to be sought in the contradictions of the
    current post-democratic regime. Over the past few years, the discrepancy
    in Western countries between the self-representation of democratic
    institutions and their frequently post-democratic practices has become
    even more obvious. For some people, including the former CIA employee
    Edward Snowden, this discrepancy created a moral conflict. He claimed
    that his work consisted in the large-scale investigation and monitoring
    of respectable citizens, thus systematically violating the Constitution,
    which he was supposed to be protecting. He resolved this inner conflict
    by gathering material about his own activity, then releasing it, with
    the help of journalists, to the public, so that the latter could
    understand and judge what was taking
    place.[^64^](#c3-note-0064){#c3-note-0064a} His leaks benefited from
    technical []{#Page_150 type="pagebreak" title="150"}advances, including
    the new forms of cooperation that they have made possible.
    Even institutions that depend on keeping secrets, such as banks and
    intelligence agencies, have to "share" their information internally and
    rely on a large pool of technical personnel to record and process the
    massive amounts of data. To accomplish these tasks, employees need the
    fullest possible access to this information, for even the most secret
    databases have to be maintained by someone, and this also involves
    copying data. Thus, it is far easier today than it was just a few
    decades ago to smuggle large volumes of data out of an
    institution.[^65^](#c3-note-0065){#c3-note-0065a}

    This new form of leaking, however, did not become an important method of
    resistance on account of technical developments alone. In the era of big
    data, databases are the central resource not only for analyzing how the
    world is described by digital communication, but also for generating
    that communication. The power of networks in particular is organized
    through the construction of environmental conditions that operate
    simultaneously in many places. On their own, the individual commands and
    instructions are often banal and harmless, but as a whole they
    contribute to a dynamic field that is meant to produce the results
    desired by the planners who issue them. In order to reconstruct this
    process, it is necessary to have access to these large amounts of data.
    With such information at hand, it is possible to relocate the
    surreptitious operations of post-democracy into the sphere of political
    debate -- the public sphere in its emphatic, liberal sense -- and this
    needs to be done in order to strengthen democratic forces against their
    post-democratic counterparts. Ten years after WikiLeaks and three years
    after Edward Snowden\'s revelations, it remains highly questionable
    whether democratic actors are strong enough or able to muster the
    political will to use this information to tip the balance in their favor
    for the long term. Despite the forms of resistance that have arisen in
    response to these new challenges, one could be tempted to concur with
    Bauman\'s pessimistic conclusion about the irrelevance of freedom,
    especially if post-democracy were the only concrete political tendency
    of the digital condition. But it is not. There is a second political
    trend taking place, though it is not quite as well
    developed.[]{#Page_151 type="pagebreak" title="151"}
    :::
    :::

    ::: {.section}
    Commons {#c3-sec-0011}
    -------

    The digital condition encompasses not only post-democratic structures
    in ever more areas of life; it is also characterized by the development
    of a new manner of production. As early as 2002, the legal scholar
    Yochai Benkler
    coined the term "commons-based peer production" to describe the
    development in question.[^66^](#c3-note-0066){#c3-note-0066a} Together,
    Benkler\'s peers form what I have referred to as "communal formations":
    people joining forces voluntarily and on a fundamentally even playing
    field in order to pursue common goals. Benkler enhances this idea with
    reference to the constitutive role of the commons for many of these
    communal formations.

    As such, commons are neither new nor specifically Western. They exist in
    many cultural traditions, and thus the term is used in a wide variety of
    ways.[^67^](#c3-note-0067){#c3-note-0067a} In what follows, I will
    distinguish between three different dimensions. The first of these
    involves "common-pool resources"; that is, *goods* that can be used
    communally. The second dimension is that these goods are administered by
    the "commoners"; that is, by members of *communities* who produce, use,
    and cultivate the resources. Third, this activity gives rise to forms of
    "commoning"; that is, to *practices*, *norms*, and *institutions* that
    are developed by the communities
    themselves.[^68^](#c3-note-0068){#c3-note-0068a}

    In the commons, efforts are focused on the long-term utility of goods.
    This does not mean that commons cannot also be used for the production
    of commercial products -- cheese from the milk of cows that graze on a
    common pasture, for instance, or books based on the content of Wikipedia
    articles. The relationships between the people who use a certain
    resource communally, however, are not structured through money but
    rather through direct social cooperation. Commons are thus
    fundamentally different from classical market-oriented institutions,
    which orient their activity primarily in response to price signals.
    Commons are also fundamentally distinct from bureaucracies -- whether in
    the form of public administration or private industry -- which are
    organized according to hierarchical chains of command. And they differ,
    too, from public institutions. Whereas the latter are concerned with
    society as a whole -- or at least that is []{#Page_152 type="pagebreak"
    title="152"}their democratic mandate -- commons are inwardly oriented
    forms that primarily exist by means and for the sake of their members.

    ::: {.section}
    ### The organization of the commons {#c3-sec-0012}

    Commoners create institutions when they join together for the sake of
    using a resource in a long-term and communal manner. In this, the
    separation of producers and consumers, which is otherwise ubiquitous,
    does not play a significant role: to different and variable extents, all
    commoners are producers and consumers of the common resources. It is an
    everyday occurrence for someone to take something from the common pool
    of resources for his or her own use, but it is understood that something
    will be created from this that, in one form or another, will flow back
    into the common pool. This process -- the reciprocal relationship
    between singular appropriation and communal provisions -- is one of the
    central dynamics within commons.

    Because commoners orient their activity neither according to price
    signals (markets) nor according to instructions or commands
    (hierarchies), social communication among the members is the most
    important means of self-organization. This communication is intended to
    achieve consensus and the voluntary acceptance of negotiated rules, for
    only in such a way is it possible to maintain the voluntary nature of
    the arrangement and to keep internal controls at a minimum. Voting,
    which is meant to legitimize the preferences of a majority, is thus
    somewhat rare, and when it does happen, it is only of subordinate
    significance. The main issue is to build consensus, and this is usually
    a complex process requiring intensive communication. One of the reasons
    why the very old practice of the commons is now being readopted and
    widely discussed is because communication-intensive and horizontal
    processes can be organized far more effectively with digital
    technologies. Thus, the idea of collective participation and
    organization beyond small groups is no longer just a utopian vision.

    The absence of price signals and chains of command causes the social
    institutions of the commons to develop complex structures for
    comprehensively integrating their members. []{#Page_153 type="pagebreak"
    title="153"}This typically involves weaving together a variety of
    economic, social, cultural, and technical dimensions. Commons realize an
    alternative to the classical separation of spheres that is so typical of
    our modern economy and society. The economy is not understood here as an
    independent realm that functions according to its own set of rules and
    generates externalities, but rather as one facet of a complex and
    comprehensive phenomenon with intertwining commercial, social, ethical,
    ecological, and cultural dimensions.

    It is impossible to determine in general terms how the interplay
    between these three dimensions solidifies into concrete institutions.
    Historically, many different commons-based institutions were developed,
    and their number and variety have only increased under the digital
    condition. Elinor Ostrom, who was awarded the 2009 Nobel Prize in
    Economics for her work on the commons, has thus refrained from
    formulating a general model for
    them.[^69^](#c3-note-0069){#c3-note-0069a} Instead, she has identified a
    series of fundamental challenges for which all commoners have to devise
    their own solutions.[^70^](#c3-note-0070){#c3-note-0070a} For example,
    the membership of a group that communally uses a particular resource
    must be defined and, if necessary, limited. Especially in the case of
    material resources, such as pastures on which several people keep their
    animals, it is important to limit the number of members for the simple
    reason that the resource in question might otherwise be over-utilized
    (the alleged "tragedy of the
    commons").[^71^](#c3-note-0071){#c3-note-0071a} Things are different
    with so-called non-rival goods, which can be consumed by one person
    without excluding their use by another. When I download and use a freely
    available word-processing program, for instance, I do not take away
    another person\'s chance to do the same. But even in the case of digital
    common goods, access is often tied to certain conditions. Whoever uses
    free software has to accept its licensing agreement.

    Internally, commons are often meritocratically oriented. Those who
    contribute more are also able to make greater use of the common good (in
    the case of material goods) or more strongly influence its development
    (in the case of informational goods). In the latter case, the
    meritocratic element takes into account the fact that the challenge does
    not lie in avoiding the over-utilization of a good, but rather in
    generating new contributions to its further development. Those who
    []{#Page_154 type="pagebreak" title="154"}contribute most to the
    provision of resources should also be able to determine their further
    course of development, and this represents an important incentive for
    these members to remain in the group. This is in the interest of all
    participants, and thus the authority of the most active members is
    seldom called into question. This does not mean, however, that there are
    no differences of opinion within commons. Here, too, reaching consensus
    can be a time-consuming process. Among the most important
    characteristics of all commons are thus mechanisms for decision-making
    that involve members in a variety of ways. The rules that govern the
    commons are established by the members themselves. This goes far beyond
    choosing between two options presented by a third party. Commons are not
    simply markets without money. All relevant decisions are made
    collectively within the commons, and they do not simply aggregate as the
    sum of individual decisions. Here, unlike the case of post-democratic
    structures, the levels of participation and decision-making are not
    separated from one another. On the contrary, they are directly and
    explicitly connected.

    The implementation of rules and norms, even if they are the result of
    consensus, is never an entirely smooth process. It is therefore
    necessary, as Ostrom has stressed, to monitor rule compliance within
    commons and to develop a system of graded sanctions. Minor infractions
    are punished with social disapproval or small penalties, while graver
    infractions warrant stiffer penalties that can lead to a person\'s
    exclusion from the group. To prevent conflicts or rule violations from
    escalating to the point where expulsion is the only option, mechanisms
    for conflict resolution have to be put in place. In
    the case of Wikipedia, for instance, conflicts are usually resolved
    through discussions. This is not always productive, however, for
    occasionally the "solution" turns out to be that one side or the other
    has simply given up out of exhaustion.

    A final important point is that commons do not exist in isolation from
    society. They are always part of larger social systems, which are
    normally governed by the principles of the market or subject to state
    control, and are thus in many cases oppositional to the practice of
    commoning. Political resistance is often incited by the very claim that
    a particular []{#Page_155 type="pagebreak" title="155"}good can be
    communally administered and does not belong to a single owner, but
    rather to a group that governs its own affairs. Yet without the
    recognition of the right to self-organization and without the
    corresponding legal conditions allowing this right to be perceived as
    such, commons are barely able to form at all, and existing commons are
    always at risk of being expropriated and privatized by a third party.
    This is the true "tragedy of the commons," and it happens all the
    time.[^72^](#c3-note-0072){#c3-note-0072a}
    :::

    ::: {.section}
    ### Informational common goods: free software and free culture {#c3-sec-0013}

    The term "commons" was first applied to informational goods during the
    second half of the 1990s.[^73^](#c3-note-0073){#c3-note-0073a} The
    practice of creating digital common goods, however, goes back to the
    origins of free software around the middle of the 1980s. Since then, a
    complex landscape has developed, with software code being cooperatively
    and sustainably managed as a common resource available to everyone (who
    accepts its licensing terms). This can best be explained with an
    example. One of the oldest projects in the area of free software -- and
    one that continues to be of relevance today -- is Debian, a so-called
    "distribution" (that is, a compilation of software components) that has
    existed since 1993. According to its own website:

    ::: {.extract}
    The Debian Project is an association of individuals who have made common
    cause to create a free operating system. \[...\] An operating system is
    the set of basic programs and utilities that make your computer run.
    \[...\] Debian comes with over 43000 packages (precompiled software that
    is bundled up in a nice format for easy installation on your machine).
    \[...\] All of it free.[^74^](#c3-note-0074){#c3-note-0074a}
    :::

    The special thing about Unix-like operating systems is that they are
    composed of a very large number of independent yet interacting programs.
    The task of a distribution -- and this task is hardly trivial -- is to
    combine this modular variety into a whole that provides, in an
    integrated manner, all of the functions of a contemporary computer.
    Debian is particularly []{#Page_156 type="pagebreak"
    title="156"}important because the community sets extremely high
    standards for itself, and it is for this reason that the distribution is
    not only used by many server administrators but is also the foundation
    of numerous end-user-oriented services, including Ubuntu and Linux Mint.
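
    To give a sense of this modularity: on any Debian-based system, the
    package database itself can be queried. The following minimal Python
    sketch, which assumes a machine with the standard `dpkg` tool
    available, lists the individual packages out of which the integrated
    whole is assembled.

    ```python
    # Minimal sketch: list the individual packages out of which a
    # Debian-based system is assembled, via the standard dpkg-query tool.
    # Assumes a Debian or Ubuntu machine; purely illustrative.
    import subprocess

    result = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package}\t${Version}\n"],
        capture_output=True, text=True, check=True,
    )

    packages = [line.split("\t") for line in result.stdout.splitlines() if line]
    print(f"{len(packages)} packages installed, for example:")
    for name, version in packages[:5]:
        print(f"  {name} {version}")
    ```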

    The Debian Project has developed a complex form of organization that is
    based on a set of fundamental principles defined by the members
    themselves. These are delineated in the Debian Social Contract, which
    was first formulated in 1997 and subsequently revised in
    2004.[^75^](#c3-note-0075){#c3-note-0075a} It stipulates that the
    software has to remain "100% free" at all times, in the sense that the
    software license guarantees the freedom of unlimited use, modification,
    and distribution. The developers understand this primarily as an ethical
    obligation. They explicitly regard the project as a contribution "to the
    free software community." The social contract demands transparency on
    the level of the program code: "We will keep our entire bug report
    database open for public view at all times. Reports that people file
    online will promptly become visible to others." There are both technical
    and ethical considerations behind this. The contract makes no mention at
    all of a classical production goal; there is no mention, for instance,
    of competitive products or a schedule for future developments. To put it
    in Colin Crouch\'s terms, input legitimation comes before output
    legitimation. The initiators silently assume that the project\'s basic
    ethical, technical, and social orientations will result in high quality,
    but they do not place this goal above any other.

    The Debian Social Contract is the basis for cooperation and the central
    reference point for dealing with conflicts. It forms the normative core
    of a community that is distinguished by its equal treatment of ethical,
    political, technical, and economic issues. The longer the members have
    been cooperating on this basis, the more binding this attitude
    has become for each of them, and the more sustainable the community has
    become as a whole. In other words, it has taken on a concrete form that
    is relevant to the activities of everyday
    life.[^76^](#c3-note-0076){#c3-note-0076a} Today, Debian is a global
    project with a stable core of about a thousand developers, most of whom
    live in Europe, the United States, and Latin
    America.[^77^](#c3-note-0077){#c3-note-0077a} The Debian commons is a
    high-grade collaborative organization, []{#Page_157 type="pagebreak"
    title="157"}the necessary cooperation for which is enabled by a complex
    infrastructure that automates many routine tasks. This is the only
    efficient way to manage the program code, which has grown to more than a
    hundred million lines. Yet not everything takes place online.
    International and local meetings and conferences have long played an
    important role. These have not only been venues for exchanging
    information and planning the coordination of the project; they have also
    helped to create a sense of mutual trust, without which this form of
    voluntary collaboration would not be possible.

    Despite the considerable size of the Debian Project, it is just one part
    of a much larger institutional ecology that includes other communities,
    universities, and businesses. Most of the 43,000 software packages of the
    Debian distribution are programmed by groups of developers that do not
    belong to the Debian Project. Debian is "just" a compilation of these
    many individual programs. One of these programs written by outsiders is
    the Linux kernel, which in many respects is the central and most complex
    program within a GNU/Linux operating system. Governing the organization
    of processes and data, it thus forms the interface between hardware and
    software. An entire institutional subsystem has been built up around
    this complex program, upon which everything else depends. The community
    of developers was initiated by Linus Torvalds, who wrote the first
    rudimentary kernel in 1991. Even though most of the kernel developers
    since then have been paid for their work, their cooperation then and now
    has been voluntary and, for the vast majority of contributors, has
    functioned without monetary exchange. In order to improve collaboration,
    a specialized technological infrastructure has been used -- above all
    Git, a version-control system that Torvalds developed himself and that
    automates many steps in managing distributed revisions of code. In all
    of this, an important
    role is played by the Linux Foundation, a non-profit organization that
    takes over administrative, legal, and financial tasks for the community.
    The foundation is financed by its members, which include large software
    companies that contribute as much as \$500,000 a year. This money is
    used, for instance, to pay the most important programmers and to
    organize working groups, thus ensuring that the development and
    distribution of Linux will continue on a long-term basis. The
    []{#Page_158 type="pagebreak" title="158"}businesses that finance the
    Linux Foundation may be profit-oriented institutions, but the main work
    of the developers -- the program code -- flows back into the common pool
    of resources, which the explicitly non-profit Debian Project can then
    use to compile its distribution. The freedoms guaranteed by the free
    license render this transfer from commercial to non-commercial use not
    only legally unproblematic but even desirable to the for-profit service
    providers, as they themselves also need entire operating systems and not
    just the kernel.
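
    What "managing distributed revisions" means in practice can be
    sketched in a few commands. The following minimal, hypothetical Python
    sketch (the repository URL and branch name are placeholders, and
    Python is used only to keep the examples in one language) shows the
    basic shape of the workflow that Git automates: every contributor
    clones a complete copy of the history, works locally, and only then
    shares the changes.

    ```python
    # Minimal, hypothetical sketch of a distributed Git workflow: every
    # contributor holds a full copy of the project history, commits
    # locally, and only then shares changes. URL and names are placeholders.
    import subprocess

    def git(*args, cwd=None):
        # Run a git command and raise an error if it fails.
        subprocess.run(["git", *args], cwd=cwd, check=True)

    git("clone", "https://example.org/kernel.git", "kernel")  # full local history
    git("checkout", "-b", "fix-scheduler", cwd="kernel")      # private local branch
    # ... edit files in kernel/ here ...
    git("commit", "-am", "scheduler: fix race condition", cwd="kernel")
    git("push", "origin", "fix-scheduler", cwd="kernel")      # share the revisions
    ```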

    The Debian Project draws from this pool of resources and is at the same
    time a part of it. Others can therefore use Debian\'s software code,
    which happens to a large extent, for instance in other Linux
    distributions. This is not understood as competition for market share
    but rather as an expression of the community\'s vitality, which for
    Debian represents a central and normative point of pride. As the Debian
    Social Contract explicitly states, "We will allow others to create
    distributions containing both the Debian system and other works, without
    any fee."

    Thus, over the years, a multifaceted institutional landscape has been
    created in which collaboration can take place between for-profit and
    non-profit entities -- between formal organizations and informal
    communal formations. Together, they form the software commons.
    Communally, they strive to ensure that high-quality free software will
    continue to exist for the long term. The coordination necessary for this
    is not tension-free. Within individual communities, on the contrary,
    there are many conflicts and competitive disputes about people, methods,
    and strategic goals. Tensions can also run high between the communities,
    foundations, and companies that cooperate and compete with one another
    (sometimes more directly, sometimes less directly). To cite one example,
    the relationship between the Debian Project and Canonical, the company
    that produces the Ubuntu operating system, was strained for several
    years. At the heart of the conflict was the issue of whether Ubuntu\'s
    developers were giving enough back to the Debian Project or whether they
    were simply exploiting it. Although the Debian Social Contract expressly
    allows the commercial use of its operating system, Canonical was and
    remains dependent on the software commons functioning as []{#Page_159
    type="pagebreak" title="159"}a whole, because, after all, the company
    needs to be able to make use of the latest developments in the Debian
    system. It took years to defuse the conflict, and this was only achieved
    when forums were set up to guarantee that information and codes could
    flow in both directions. The Debian community, for example, introduced
    something called a "derivatives front desk" to improve its communication
    with programmers of distributions that, like Ubuntu, derive from Debian.
    For its part, Canonical improved its internal processes so that code
    could flow back into the Debian Project, and their systems for
    bug-tracking were partially integrated to avoid duplicates. After
    several years of strife, Raphaël Hertzog, a prominent member of the
    Debian community, was able to summarize matters as follows:

    ::: {.extract}
    The Debian--Ubuntu relationship used to be a hot topic, but that\'s no
    longer the case thanks to regular efforts made on both sides. Conflicts
    between individuals still happen, but there are multiple places where
    they can be reported and discussed \[...\]. Documentation and
    infrastructure are in place to make it easier for volunteers to do the
    right thing. Despite all those process improvements, the best results
    still come out when people build personal relationships by discussing
    what they are doing. It often leads to tight cooperation, up to commit
    rights to the source repositories. Regular contacts help build a real
    sense of cooperation that no automated process can ever hope to
    achieve.[^78^](#c3-note-0078){#c3-note-0078a}
    :::

    In all successful commons, diverse social relations, mutual trust, and a
    common culture play an important role as preconditions for the
    consensual resolution of conflicts. This is not a matter of achieving an
    ideal -- as Hertzog stressed, not every conflict can be set aside -- but
    rather of reaching pragmatic solutions that allow actors to pursue, on
    equal terms, their own divergent goals within the common project.

    The immense commons of the Debian Project encompasses a nearly
    unfathomable number of variations. The distribution is available in over
    70 languages (in comparison, Apple\'s operating system is sold in 22
    languages), and diverse versions exist to suit different application
    contexts, aesthetic preferences, hardware needs, and stability
    requirements. Within each of these versions, in turn, there are
    innumerable []{#Page_160 type="pagebreak" title="160"}variations that
    have been created by individual users with different sets of technical
    or creative skills. The final result is a continuously changing service
    that can be adapted for countless special requirements, desires, and
    other features. To outsiders, this internal differentiation is often
    difficult to comprehend, and it can quickly leave the impression that there
    is little more to it than a tedious variety of essentially the same
    thing. What user would ever need 60 different text
    editors?[^79^](#c3-note-0079){#c3-note-0079a} For those who would like
    to use free software without having to join a group, a greater number of
    simple and standardized products have been made available. For
    commoners, however, this diversity is enormously important, for it is an
    expression of their fundamental freedom to work precisely on those
    problems that are closest to their hearts -- even if that means creating
    another text editor.

    With the success of free software toward the end of the 1990s, producers
    in other areas of culture, who were just starting to use the internet,
    also began to take an interest in this new manner of production. It
    seemed to be a good fit with the vibrant do-it-yourself culture that was
    blooming online, and all the more so because there were hardly any
    attractive commercial alternatives at the time. This movement was
    sustained by the growing stratum of professional and non-professional
    makers of culture that had emerged over the course of the aforementioned
    transformations of the labor market. At first, many online sources were
    treated as "quasi-common goods." It was considered normal and desirable
    to appropriate them and pass them on to others without first having to
    develop a proper commons for such activity. This necessarily led to
    conflicts. Unlike free software, which on account of its licensing was
    on secure legal ground from the beginning, the new do-it-yourself
    culture was rife with copyright violations. For the sake of engaging in
    the referential processes discussed in the previous chapter,
    copyright-protected content was (and continues to be) used, reproduced,
    and modified without permission. Around the turn of the millennium, the
    previously latent conflict between "quasi-commoners" and the holders of
    traditional copyrights became an open dispute, which in many cases was
    resolved in court. Founded in June 1999, the file-sharing service
    Napster gained, over the course of just 18 months, 25 million users
    []{#Page_161 type="pagebreak" title="161"}worldwide who simply took the
    distribution of music into their own hands without the authorization of
    copyright owners. This incited a flood of litigation that managed to
    shut the service down in July 2001. This did not, however, put an end to
    the large-scale practice of unauthorized data sharing. New services and
    technologies, many of which used the file-sharing protocol BitTorrent,
    quickly filled in the gap. The number of court cases skyrocketed, not
    least because new legal standards expanded the jurisdiction of copyright
    law and enabled it to be applied more
    aggressively.[^80^](#c3-note-0080){#c3-note-0080a} These conflicts
    forced a critical mass of cultural producers to deal with copyright law
    and to reconsider how the practices of sharing and modifying could be
    perpetuated in the long term. One of the first results of these
    considerations was to develop, following the model of free software,
    numerous licenses that were tailored to cultural
    production.[^81^](#c3-note-0081){#c3-note-0081a} In the cultural
    context, free licenses achieved widespread distribution after 2001 with
    the arrival of Creative Commons (CC), a California-based foundation that
    began to provide easily understandable and adaptable licensing kits and
    to promote its services internationally through a network of partner
    organizations. This set of licenses made it possible to transfer user
    rights to the community (defined by the acceptance of the license\'s
    terms and conditions) and thus to create a freely accessible pool of
    cultural resources. Works published under a CC license can always be
    consumed and distributed free of charge (though not necessarily freely).
    Some versions of the license allow works to be altered; others permit
    their commercial use; while some, in turn, only allow non-commercial use
    and distribution. In comparison with free software licenses, this
    greater emphasis on the rights of individual producers over those of the
    community, whose freedoms of use can be doubly restricted (in terms of
    the right to alter works or use them for commercial ends), gave rise to
    the long-standing critique that, with respect to freedom and
    communality, CC licenses in fact represent a
    regression.[^82^](#c3-note-0082){#c3-note-0082a} A combination of good
    timing, user-friendly implementations, and powerful support from leading
    American universities, however, resulted in CC licenses becoming the de
    facto legal standard of free culture.

    Based on a solid legal foundation and thus protected from rampant
    copyright conflicts, large and well-structured []{#Page_162
    type="pagebreak" title="162"}cultural commons were established, for
    instance around the online reference work Wikipedia (which was then,
    however, using a different license). As much as the latter is now taken
    for granted as an everyday component of informational
    life,[^83^](#c3-note-0083){#c3-note-0083a} the prospect of a
    commons-generated encyclopedia hardly seemed realistic at the beginning.
    Even the founders themselves had little faith in it, and thus Wikipedia
    began as a side project. Their primary goal was to develop an
    encyclopedia called Nupedia, for which only experts would be allowed to
    write entries, which would then have to undergo a seven-stage
    peer-review process before being published for free use. From its
    beginning, by contrast, Wikipedia was open for anyone to edit, and
    any changes made to it were published without review or delay. By the
    time that Nupedia was abandoned in September 2003 (with only 25
    published articles), the English-language version of Wikipedia already
    consisted of more than 160,000 entries, and the German version, which
    came online in May 2001, already had 30,000. The former version reached
    1 million entries by March 2006, the latter by December 2009, and by
    the beginning of 2015 they had 4.7 million and 1.8 million entries,
    respectively. In the meantime (by August 2015), versions have been made
    available in 289 other languages, 48 of which have at least 100,000
    entries. Both its successes -- its enormous breadth of up-to-date
    content, along with its high level of acceptance and quality -- and its
    failures -- its low percentage of women editors (around 10 percent),
    exhausting discussions, complex rules, lack of new contributors, and
    systematic attempts at manipulation -- have been well documented because
    Wikipedia also guarantees free access to the data generated by the
    activities of users, and thus makes the development of the commons
    fairly transparent for outsiders.[^84^](#c3-note-0084){#c3-note-0084a}
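
    This transparency is concrete: the data generated by user activity can
    be read by anyone through the public MediaWiki API. A minimal Python
    sketch (it assumes the widely used `requests` library; the endpoint
    and parameters follow the documented MediaWiki conventions) retrieves
    the most recent edits:

    ```python
    # Minimal sketch: read the public stream of recent edits through the
    # standard MediaWiki API. Assumes the third-party requests library.
    import requests

    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "recentchanges",
            "rcprop": "title|user|timestamp",
            "rclimit": 10,
            "format": "json",
        },
        headers={"User-Agent": "commons-example/0.1"},
    )
    resp.raise_for_status()

    for change in resp.json()["query"]["recentchanges"]:
        print(change["timestamp"], change["user"], "edited", change["title"])
    ```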

    One of the most fundamental and complex decisions in the history of
    Wikipedia was to change its license. The process behind this is
    indicative of how thoroughly the community of a commons can be involved
    in its decision-making. When Wikipedia was founded in 2001, there was no
    established license for free cultural works. The best option available
    was the GNU Free Documentation License (GFDL), which had been
    developed, however, for software documentation. In the following years,
    the CC license became the standard, and this []{#Page_163
    type="pagebreak" title="163"}gave rise to the legal problem that content
    from Wikipedia could not be combined with CC-licensed works, even though
    this would have aligned with the intentions of those who had published
    content under either of these licenses. To alleviate this problem and
    thus facilitate exchange between Wikipedia and other cultural commons,
    the Wikimedia Foundation (which holds the rights to Wikipedia) proposed
    to place older content retroactively under both licenses, the GFDL and
    the equivalent CC license. In strictly legal terms, the foundation would
    have been able to make this decision without consulting the community.
    However, it would have lacked legitimacy and might have even caused
    upheavals within it. In order to avoid this, an elaborate discussion
    process was initiated that led to a membership-wide vote. This process
    lasted from December 2007 (when the Wikimedia Foundation resolved to
    change the license) to the end of May 2009, when the voting period
    concluded. All told, 17,462 votes were cast, of which only 10.5 percent
    rejected the proposed changes. More important than the result, however,
    was the way it had come about: through a long, consensus-building
    process of discussion, for which the final vote served above all to make
    the achieved consensus unambiguously
    clear.[^85^](#c3-note-0085){#c3-note-0085a} All other decisions that
    concern the project as a whole were and continue to be reached in a
    similar way. Here, too, input legitimation is at least on an equal
    footing with output legitimation.

    With Wikipedia, a great deal happens voluntarily and without cost, but
    that does not mean that no financial resources are needed to organize
    and maintain such a commons on a long-term basis. In particular, it is
    necessary to raise funds for infrastructure (hardware, administration,
    bandwidth), the employees of the Wikimedia Foundation, conferences, and
    its own project initiatives -- networking with schools, universities,
    and cultural institutions, for example, or increasing the diversity of
    the Wikipedia community. In light of the number of people who use the
    encyclopedia, it would be possible to finance the project, which accrued
    costs of around \$45 million during the 2013--14 fiscal year,
    through advertising (in the same manner, that is, as commercial mass
    media). Yet there has always been a consensus against this. Instead,
    Wikipedia is financed through donations. In 2013--14, the website was
    able to raise \$51 million, \$37 million of []{#Page_164 type="pagebreak"
    title="164"}which came from approximately 2.5 million contributors, each
    of whom donated just a small sum.[^86^](#c3-note-0086){#c3-note-0086a}
    These small contributions are especially interesting because, to a large
    extent, they come from people who consider themselves part of the
    community but do not do much editing. This suggests that donating is
    understood as an opportunity to make a contribution without having to
    invest much time in the project. In this case, donating money is thus
    not an expression of charity but rather of communal spirit; it is just
    one of many ways to remain active in a commons. Precisely
    because its economy is not understood as an independent sphere with its
    own logic (maximizing individual resources), but rather as an integrated
    aspect of cultivating a common resource, non-financial and financial
    contributions can be treated equally. Both types of contribution
    ultimately derive from the same motivation: they are expressions of
    appreciation for the meaning that the common resource possesses for
    one\'s own activity.
    :::

    ::: {.section}
    ### At the interface with physical space: open data {#c3-sec-0014}

    Wikipedia, however, is an exception. None of the other new commons have
    managed to attract such large financial contributions. The project known
    as OpenStreetMap (OSM), which was founded in 2004 by Steve Coast,
    is the most important commons for
    geodata.[^87^](#c3-note-0087){#c3-note-0087a} By the beginning of 2016,
    it had collected and identified around 5 billion GPS coordinates and
    linked them to more than 273 million routes. This work was accomplished
    by about half a million people, who surveyed their neighborhoods with
    hand-held GPS devices or, where that was not possible, extracted
    data from satellite images or from public land registries. The project,
    which is organized through specialized infrastructure and by local and
    international communities, also utilizes a number of automated
    processes. These are so important that a "mechanical edit policy" was
    developed to govern the use of algorithms for editing, and it was
    supplemented by an "automated edits code of conduct," which defines
    further rules of behavior. Regarding the
    implementation of a new algorithm, for instance, the code states: "We do
    not require or recommend a formal vote, but if there []{#Page_165
    type="pagebreak" title="165"}is significant objection to your plan --
    and even minorities may be significant! -- then change it or drop it
    altogether."[^88^](#c3-note-0088){#c3-note-0088a} Here, again, there is
    the typical objection to voting and a focus on building a consensus that
    does not have to be perfect but simply good enough for the overwhelming
    majority of the community to acknowledge it (a "rough consensus").
    Today, the coverage and quality of the maps that can be generated from
    these data are so good for so many areas that they now represent serious
    competition to commercial digital alternatives. OSM data are used not
    only by Wikipedia and other non-commercial projects but also
    increasingly by large commercial services that need geographical
    information and suitable maps but do not want to rely on a commercial
    provider whose terms and conditions can change at any time. To the
    extent that these commercial applications provide their users with the
    opportunity to improve the maps, their input flows back through the
    commercial level and into the common pool.
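
    The common pool itself can be queried by anyone, for instance through
    the public Overpass API. The following minimal Python sketch (the
    endpoint and the Overpass query language are real, but the code is
    illustrative rather than robust) fetches the drinking-water fountains
    recorded within a kilometre of Alexanderplatz in Berlin:

    ```python
    # Minimal sketch: query the OSM commons through the public Overpass API.
    # Fetches drinking-water fountains within 1 km of Alexanderplatz, Berlin.
    import requests

    query = """
    [out:json];
    node["amenity"="drinking_water"](around:1000,52.5219,13.4132);
    out;
    """

    resp = requests.post(
        "https://overpass-api.de/api/interpreter", data={"data": query}
    )
    resp.raise_for_status()

    for node in resp.json()["elements"]:
        print(node["lat"], node["lon"], node.get("tags", {}))
    ```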

    Despite its immense community and its regular requests for donations,
    the financial resources of the OSM Foundation, which functions as the
    legal entity and supporting organization behind the project, cannot be
    compared to those of the Wikipedia Foundation. The OSM Foundation has no
    employees, and in 2014 it generated just £88,000 in revenue, half of
    which was obtained from donations and half from holding
    conferences.[^89^](#c3-note-0089){#c3-note-0089a} That said, OSM is
    nevertheless a socially, technologically, and financially robust
    commons, though one with a model entirely different from Wikipedia\'s.
    Because data are at the heart of the project, its needs for hardware and
    bandwidth are negligible compared to Wikipedia\'s, and its servers can
    be housed at universities or independently operated by individual
    groups. Around this common resource, a global network of companies has
    formed that offer services on the basis of complex geodata. In doing so,
    they allow improvements to go back into the pool or, if financed by
    external sources, they can work directly on the common
    infrastructure.[^90^](#c3-note-0090){#c3-note-0090a} Here, too, we find
    the characteristic juxtaposition of paid and unpaid work, of commercial
    and non-commercial orientations that depend on the same common resource
    to pursue their divergent goals. The longer this continues, the
    stronger the (self-)interest of everyone involved that their own work,
    []{#Page_166 type="pagebreak" title="166"}or at least part of it,
    should benefit the long-term development of the resource in question.
    Functioning commons, especially the new
    informational ones, are distinguished by the heterogeneity of their
    motivations and actors. Just as the Wikipedia project successfully and
    transformatively extended the experience of working with free software
    to the generation of large bases of knowledge, the community responsible
    for OpenStreetMap succeeded in making the experiences of the Wikipedia
    project useful for the creation of a commons based on large datasets,
    and managed to adapt these experiences according to the specific needs
    of such a project.[^91^](#c3-note-0091){#c3-note-0091a}

    It is of great political significance that informational commons have
    expanded into the areas of data recording and data use. Control over
    data, which specify and describe the world in real time, is an essential
    element of the contemporary constitution of power. From large volumes
    of data, new types of insight can be gained and new strategies for
    action can be derived. The more one-sided access to data becomes, the
    more it yields imbalances of power.

    In this regard, the commons model offers an alternative, for it allows
    various groups equal and unobstructed access to this potential resource
    of power. This, at least, is how the Open Data movement sees things.
    Data are considered "open" if they are available to everyone without
    restriction to be used, distributed, and developed freely. For this to
    occur, it is necessary to provide data in a standard-compatible format
    that is machine-readable. Only in such a way can they be browsed by
    algorithms and further processed. Open data are an important
    precondition for implementing the power of algorithms in a democratic
    manner. They ensure that there can be an effective diversity of
    algorithms, for anyone can write his or her own algorithm or commission
    others to process data in various ways and in light of various
    interests. Because algorithms cannot be neutral, their diversity -- and
    the resulting ability to compare the results of different methods -- is
    an important precondition for them not becoming an uncontrollable
    instrument of power. This can be achieved most dependably through free
    access to data, which are maintained and cultivated as a commons.
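
    A minimal sketch can make this argument concrete. Assuming a
    hypothetical open dataset of air measurements published as
    machine-readable CSV (the file name and column are illustrative
    stand-ins, not a real published dataset), anyone can run their own
    analysis over the same data and compare the results of different
    methods:

    ```python
    # Minimal sketch: two independent analyses of the same machine-readable
    # open dataset. The file name and column are hypothetical stand-ins.
    import csv
    from statistics import mean, median

    with open("air_quality.csv", newline="") as f:   # hypothetical dataset
        values = [float(row["no2_ugm3"]) for row in csv.DictReader(f)]

    # Because the raw data are open, anyone can run a different algorithm
    # over them and compare the divergent results.
    print("mean NO2:  ", mean(values))
    print("median NO2:", median(values))
    ```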

    Motivated by the conviction that free access to data represents a
    necessary condition for autonomous activity in the []{#Page_167
    type="pagebreak" title="167"}digital condition, many new initiatives
    have formed that are devoted to the decentralized collection,
    networking, and communal organization of data. For several years, for
    instance, there has been a global community of people who observe the
    airplanes within their field of view, share this information with one
    another, and make it generally accessible. Outside of this tight
    community, these data are typically of little interest. Yet it was
    through his targeted analysis of this information that the geographer
    and artist Trevor Paglen succeeded in mapping out the secret arrests
    made by American intelligence services. Ultimately, even the CIA\'s
    clandestine airplanes have to take off and land like any others, and
    thus they can be observed.[^92^](#c3-note-0092){#c3-note-0092a} Around
    the collection of environmental data, a movement has formed whose
    adherents enter measurements themselves. To cite just one example:
    thanks to a successful crowdfunding campaign that raised more than
    \$144,000 (just \$39,000 was needed), it was possible to finance the
    development of a simple set of sensors called the Air Quality Egg. This
    device can measure the concentration of carbon dioxide or nitrogen
    dioxide in the air and send its findings to a public database. It
    involves the use of relatively simple technologies that are likewise
    freely licensed (open hardware). How to build and use it is documented
    in such a detailed and user-friendly manner -- in instructional videos
    on YouTube, for instance -- that anyone so inclined can put one together
    on his or her own, and it would also be easy to have them made on a
    large scale as a commercial product. Over time, this has brought about a
    network of stations that is able to measure the quality of the air
    exactly, locally, and in places that are relevant to users. All of this
    information is stored in a global and freely accessible database, from
    which it is possible to look up and analyze hyper-local data in real
    time and without restrictions.[^93^](#c3-note-0093){#c3-note-0093a}
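
    The underlying pattern is simple enough to sketch. In the following
    hypothetical Python example, the endpoint, the payload fields, and the
    sensor read-out are stand-ins rather than the actual Air Quality Egg
    protocol; what matters is the flow from local measurement to a freely
    accessible database:

    ```python
    # Minimal, hypothetical sketch of the pattern: a local sensor reading
    # is pushed to a freely accessible database. Endpoint, fields, and the
    # read_no2() function are stand-ins, not the Air Quality Egg protocol.
    import time
    import requests

    def read_no2() -> float:
        # Hypothetical read-out; a real device would sample its hardware.
        return 23.5  # micrograms per cubic metre

    payload = {
        "sensor_id": "egg-0001",        # hypothetical identifier
        "timestamp": int(time.time()),
        "no2_ugm3": read_no2(),
    }

    resp = requests.post("https://example.org/api/measurements", json=payload)
    resp.raise_for_status()
    print("stored with status", resp.status_code)
    ```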

    A list of examples of data commons, both the successful and the
    unsuccessful, could go on and on. It will suffice, however, to point out
    that many new commons have come about that are redefining the interface
    between physical and informational space and creating new strategies for
    actions in both directions. The Air Quality Egg, which is typical in
    this regard, also demonstrates that commons can develop cumulatively.
    Free software and free hardware are preconditions for []{#Page_168
    type="pagebreak" title="168"}producing and networking such an object. No
    less important are commercial and non-commercial infrastructures for
    communal learning, compiling documentation, making information
    available, and thus facilitating access for those interested and
    building up the community. All of this depends on free knowledge, from
    Wikipedia to scientific databases. This enables a great variety of
    actors -- in this case environmental scientists, programmers,
    engineers, and interested citizens -- to come together and create a
    common frame of reference in which everyone can pursue his or her own
    goals and yet do so on the basis of communal resources. This, in turn,
    has given rise to a new commons, namely that of environmental data.

    Not all data can or must be collected by individuals, for a great deal
    of data already exists. That said, many scientific and state
    institutions face the problem of having data that, though nominally
    public (or at least publicly funded), are in fact extremely difficult
    for third parties to use. Such information may exist, but it is kept in
    institutions to which there is no or little public access, or it exists
    only in analog or non-machine-readable formats (as PDFs of scanned
    documents, for instance), or its use is tied to high license fees. One
    of the central demands of the Open Data and Open Access movements is
    thus to have free access to these collections. Yet there has been a
    considerable amount of resistance. Whether for political or economic
    reasons, many public and scientific institutions do not want their data
    to be freely accessible. In many cases, moreover, they also lack the
    competence, guidelines, budgets, and internal processes that would be
    necessary to make their data available to begin with. But public
    pressure has been mounting, not least through initiatives such as the
    global Open Data Index, which compares countries according to the
    accessibility of their information.[^94^](#c3-note-0094){#c3-note-0094a}
    In Germany, the Digital Openness Index evaluates states and municipalities
    in terms of open data, the use of open-source software, the availability
    of open infrastructures (such as free internet access in public places),
    open policies (the licensing of public information,
    freedom-of-information laws, the transparency of budget planning, etc.),
    and open education (freely accessible educational resources, for
    instance).[^95^](#c3-note-0095){#c3-note-0095a} The results are rather
    sobering. The Open Data Index has identified 10 []{#Page_169
    type="pagebreak" title="169"}different datasets that ought to be open,
    including election results, company registries, maps, and national
    statistics. A study of 97 countries revealed that, by the middle of
    2015, only 11 percent of these datasets were entirely freely accessible
    and usable.

    Although public institutions are generally slow and reluctant to make
    their data freely available, important progress has nevertheless been
    made. Such progress indicates not only that the new commons have
    developed their own structures in parallel with traditional
    institutions, but also that the commoners have begun to make new demands
    on established institutions. These are intended to change their internal
    processes and their interaction with citizens in such a way that they
    support the creation and growth of commons. This is not something that
    can be achieved overnight, for the institutions in question need to
    change at a fundamental level with respect to their procedures,
    self-perception, and relation to citizens. This is easier said than
    done.
    :::

    ::: {.section}
    ### Municipal infrastructures as commons: citizen networks {#c3-sec-0015}

    The demands for open access to data, however, are not exhausted by
    attempts to redefine public institutions and civic participation. In
    fact, they go far beyond that. In Germany, for instance, there has been
    a recent movement toward (re-)communalizing the basic provision of water
    and energy. Its goal is not merely to shift the ownership structure from
    private to public. Rather, its intention is to reorient the present
    institutions so that, instead of operating entirely on the basis of
    economic criteria, they also take into account democratic, ecological,
    and social factors. These efforts reached a high point in November 2013,
    when the population of Berlin was called upon to vote on the
    communalization of the power supply. Formed in 2011, a non-partisan
    coalition of NGOs and citizens known as the Berlin Energy Roundtable had
    mobilized to take over the local energy grid, whose license was due to
    become available in 2014. The proposal was for the network to be
    administered neither entirely privately nor entirely by the public.
    Instead, the license was to be held by a newly formed municipal utility
    that would not only []{#Page_170 type="pagebreak" title="170"}organize
    the efficient operation of the grid but also pursue social causes, such
    as the struggles against energy poverty and power cuts, and support
    ecological causes, including renewable energy sources and energy
    conservation. It was intended, moreover, for the utility to be
    democratically organized; that is, for it to offer expanded
    opportunities for civic participation on the basis of the complete
    transparency of its internal processes in order to increase -- and
    ensure for the long term -- citizens\' acceptance of and identification
    with it.

    Yet it did not get that far. Even though the result was extremely
    close, the referendum failed to pass. While 83 percent voted in favor of the
    new utility, the necessary quorum of 25 percent of all eligible voters
    was not quite achieved (the voter turnout was 24.71 percent).
    Nevertheless, the vote represented a milestone. For the first time ever
    in a large European metropolis, a specific model "beyond the market and
    the state" had been proposed for an essential aspect of everyday life
    and put before the people. A central component of infrastructure, the
    reliability of which is absolutely indispensable for life in any modern
    city, was close to being treated as a common good, supported by a new
    institution, and governed according to a statute that explicitly
    formulated economic, social, ecological, and democratic goals on equal
    terms. This would not have resulted in a commons in the strict sense,
    but rather in a new public institution that would have adopted and
    embodied the values and orientations that, because of the activity of
    commons, have increasingly become everyday phenomena in the digital
    condition.

    In its effort to develop institutional forms beyond the market and the
    state, the Berlin Energy Roundtable is hardly unique. It is rather part
    of a movement that is striving for fundamental change and is in many
    respects already quite advanced. In Denmark, for example, not only does
    a comparatively large amount of energy come from renewable sources (27.2
    percent of total use, as of 2014), but 80 percent of the country\'s
    wind-generated electricity is produced by self-administered cooperatives
    or by individual people and
    households.[^96^](#c3-note-0096){#c3-note-0096a} The latter, as is
    typical of commons, function simultaneously as producers and consumers.

    It is not a coincidence that commons have begun to infiltrate the energy
    sector. As Jeremy Rifkin has remarked:[]{#Page_171 type="pagebreak"
    title="171"}

    ::: {.extract}
    The generation that grew up on the Communication Internet and that takes
    for granted its right to create value in distributed, collaborative,
    peer-to-peer virtual commons has little hesitation about generating
    their own green electricity and sharing it on an Energy Internet. They
    find themselves living through a deepening global economic crisis and an
    even more terrifying shift in the earth\'s climate, caused by an
    economic system reliant on fossil fuel energy and managed by
    centralized, top-down command and control systems. If they fault the
    giant telecommunications, media and entertainment companies for blocking
    their right to collaborate freely with their peers in an open
    Information Commons, they are no less critical of the world\'s giant
    energy, power, and utility companies, which they blame, in part, for the
    high price of energy, a declining economy and looming environmental
    crisis.[^97^](#c3-note-0097){#c3-note-0097a}
    :::

    It is not necessary to see in this, as Rifkin and a few others have
    done, the ineluctable demise of
    capitalism.[^98^](#c3-note-0098){#c3-note-0098a} Yet, like the influence
    of post-democratic institutions over social mass media and beyond, the
    commons are also shaping new expectations about possible courses of
    action and about the institutions that might embody these possibilities.
    :::

    ::: {.section}
    ### Eroding the commons: cloud software and the sharing economy {#c3-sec-0016}

    Even if the commons have recently enjoyed a renaissance, their continued
    success is far from guaranteed. This is not only because legal
    frameworks, then and now, are not oriented toward them. Two movements
    currently stand out that threaten to undermine the commons from within
    before they can properly establish themselves. These movements have been
    exploiting certain aspects of the commons while pursuing goals that are
    harmful to them. Communal resources can thus be used as the basis for
    closed and centralized services. An
    example of this is so-called cloud software; that is, applications that
    no longer have to be installed on the computer of the user but rather
    are centrally run on the providers\' servers. Such programs are no
    longer distributed in the traditional sense, and thus they are exempt
    from the obligations mandated by free licenses, which are tied to the
    act of distribution. They do not, []{#Page_172
    type="pagebreak" title="172"}in other words, have to make their readable
    source code available along with their executable program code. Cloud
    providers are thus able to make wide use of free software, but they
    contribute very little to its further development. The changes that they
    make are implemented exclusively on their own computers and therefore do
    not have to be made public. They follow the letter of the
    license, but not its spirit. Through the control of services, it is also
    possible for nominally free and open-source software to be centrally
    controlled. Google\'s Android operating system for smartphones consists
    largely of free software, but by integrating it so deeply with its
    closed applications (such as Google Maps and Google Play Store), the
    company ensures that even modified versions of the system will supply
    data in which Google has an
    interest.[^99^](#c3-note-0099){#c3-note-0099a}

    The idea of the communal use and provision of resources is eroded most
    clearly by the so-called sharing economy, especially by companies such
    as the short-term lodging service Airbnb or Uber, which began as a taxi
    service but has since expanded into other areas of business. In such
    cases, terms like "open" or "sharing" do little more than give a trendy
    and positive veneer to hyper-capitalistic structures. Instead of
    supporting new forms of horizontal cooperation, the sharing economy is
    forcing more and more people into working conditions in which they have
    to assert themselves on their own, without insurance and with complete
    flexibility, all the while being coordinated by centralized,
    internet-based platforms.[^100^](#c3-note-0100){#c3-note-0100a} Although
    the companies in question take a significant portion of overall revenue
    for their "intermediary" services, they act as though they merely
    facilitate such work and thus take no responsibility for their "newly
    self-employed" freelance
    workforce.[^101^](#c3-note-0101){#c3-note-0101a} The risk is passed on
    to individual providers, who are in constant competition with one
    another, and this only heightens the precariousness of labor relations.
    As is typical of post-democratic institutions, the sharing economy has
    allowed certain disparities to expand into broader sectors of society,
    namely the power and income gap that exists between those who
    "voluntarily" use these services and the platform operators that
    determine the conditions under which they are offered.[]{#Page_173
    type="pagebreak" title="173"}
    :::
    :::

    ::: {.section}
    Against a Lack of Alternatives {#c3-sec-0017}
    ------------------------------

    For now, the digital condition has given rise to two highly divergent
    political tendencies. The tendency toward "post-democracy" is
    essentially leading to an authoritarian society. Although this society
    may admittedly contain a high degree of cultural diversity, and although
    its citizens are able to (or have to) lead their lives in a
    self-responsible manner, they are no longer able to exert any influence
    over the political and economic structures in which their lives are
    unfolding. On the basis of data-intensive and comprehensive
    surveillance, these structures are instead shaped disproportionately by
    an influential few. The resulting imbalance of power has been growing
    steadily, as has income inequality. In contrast to this, the tendency
    toward commons is leading to a renewal of democracy, based on
    institutions that exist outside of the market and the state. At its core
    this movement involves a new combination of economic, social, and
    (ever-more pressing) ecological dimensions of everyday life on the basis
    of data-intensive participatory processes.

    What these two developments have in common is their comprehensive
    realization of the infrastructural possibilities of the present. Both of
    them develop new relations of production on the basis of new productive
    forces (to revisit the terminology introduced at the beginning of this
    chapter) or, in more general terms, they create suitable social
    institutions for these new opportunities. In this sense, both
    developments represent coherent and comprehensive answers to the
    Gutenberg Galaxy\'s long-lasting crisis of cultural forms and social
    institutions.

    It remains to be seen whether one of these developments will prevail
    entirely or whether and how they will coexist. Despite all of the new
    and specialized methods for making predictions, the future is still
    largely unpredictable. Too many moving variables are at play, and they
    are constantly influencing one another. This is not least the case
    because everyone\'s activity -- at times individually aggregated, at times
    collectively organized -- is contributing directly and indirectly to
    these contradictory developments. And even though an individual or
    communal contribution may seem small, it is still exactly []{#Page_174
    type="pagebreak" title="174"}that: a contribution to a collective
    movement in one direction or the other. This assessment should not be
    taken as some naïve appeal along the lines of "Be the change you want to
    see!" The issue here is not one of personal attitudes but rather of
    social structures. Effective change requires forms of organization that
    are able to implement it for the long term and in the face of
    resistance. In this regard, the side of the commons has a great deal
    more work to do.

    Yet if, despite all of the simplifications that I have made, this
    juxtaposition of post-democracy and the commons has revealed anything,
    it is that even rapid changes, whose historical and structural
    dimensions cannot be controlled on account of their overwhelming
    complexity, are anything but fixed in their concrete social
    formulations. Even if it is impossible to preserve the old institutions
    and cultural forms in their traditional roles -- regardless of all the
    historical achievements that may be associated with them -- the dispute
    over what world we want to live in, and over the goals that the
    available potential of the present should be used to achieve, is as
    open as ever. And such
    is the case even though post-democracy wishes to abolish the political
    itself and subordinate everything to a technocratic lack of
    alternatives. The development of the commons, after all, has shown that
    genuine, fundamental, and cutting-edge alternatives do indeed exist. The
    contradictory nature of the present is keeping the future
    open.[]{#Page_175 type="pagebreak" title="175"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c3-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c3-note-0001a){#c3-note-0001}  Karl Marx, *A Contribution to the
    Critique of Political Economy*, trans. S. W. Ryazanskaya (London:
    Lawrence and Wishart, 1971), p. 21.[]{#Page_196 type="pagebreak"
    title="196"}

    [2](#c3-note-0002a){#c3-note-0002}  See, for instance, Tomasz Konicz and
    Florian Rötzer (eds), *Aufbruch ins Ungewisse: Auf der Suche nach
    Alternativen zur kapitalistischen Dauerkrise* (Hanover: Heise
    Zeitschriften Verlag, 2014).

    [3](#c3-note-0003a){#c3-note-0003}  Jacques Rancière, *Disagreement:
    Politics and Philosophy*, trans. Julie Rose (Minneapolis, MN: University
    of Minnesota Press, 1999), p. 102 (the emphasis is original).

    [4](#c3-note-0004a){#c3-note-0004}  Colin Crouch, *Post-Democracy*
    (Cambridge: Polity, 2004), p. 4.

    [5](#c3-note-0005a){#c3-note-0005}  Ibid., p. 6.

    [6](#c3-note-0006a){#c3-note-0006}  Ibid., p. 96.

    [7](#c3-note-0007a){#c3-note-0007}  These questions have already been
    discussed at length, for instance in a special issue of the journal
    *Neue Soziale Bewegungen* (vol. 4, 2006) and in the first two issues of
    the journal *Aus Politik und Zeitgeschichte* (2011).

    [8](#c3-note-0008a){#c3-note-0008}  See Jonathan B. Postel, "RFC 821,
    Simple Mail Transfer Protocol," *Information Sciences Institute:
    University of Southern California* (August 1982), online: "An important
    feature of SMTP is its capability to relay mail across transport service
    environments."

    [9](#c3-note-0009a){#c3-note-0009}  One of the first providers of
    Webmail was Hotmail, which became available in 1996. Just one year
    later, the company was purchased by Microsoft.

    [10](#c3-note-0010a){#c3-note-0010}  Barton Gellman and Ashkan Soltani,
    "NSA Infiltrates Links to Yahoo, Google Data Centers Worldwide, Snowden
    Documents Say," *Washington Post* (October 30, 2013), online.

    [11](#c3-note-0011a){#c3-note-0011}  Initiated by hackers and activists,
    the Mailpile project raised more than \$160,000 in September 2013 (the
    fundraising goal had been just \$100,000). In July 2014, the rather
    business-oriented project ProtonMail raised \$400,000 (its target, too,
    had been just \$100,000).

    [12](#c3-note-0012a){#c3-note-0012}  In July 2014, for instance, Google
    announced that it would support "end-to-end" encryption for emails. See
    "Making End-to-End Encryption Easier to Use," *Google Security Blog*
    (June 3, 2014), online.

    [13](#c3-note-0013a){#c3-note-0013}  Not all services use algorithms to
    sort through data. Twitter does not filter the news stream of individual
    users but rather allows users to create their own lists or to rely on
    external service providers to select and configure them. This is one of
    the reasons why Twitter is regarded as "difficult." The service is so
    centralized, however, that this can change at any time, which indeed
    happened at the beginning of 2016.

    [14](#c3-note-0014a){#c3-note-0014}  Quoted from "Schrems:
    'Facebook-Abstimmung ist eine Farce'," *Futurezone.at* (July 4, 2012),
    online \[--trans.\].

    [15](#c3-note-0015a){#c3-note-0015}  Elliot Schrage, "Proposed Updates
    to Our Governing Documents," [Facebook.com](http://Facebook.com)
    (November 21, 2011), online.[]{#Page_197 type="pagebreak" title="197"}

    [16](#c3-note-0016a){#c3-note-0016}  Quoted from the documentary film
    *Terms and Conditions May Apply* (2013), directed by Cullen Hoback.

    [17](#c3-note-0017a){#c3-note-0017}  Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

    [18](#c3-note-0018a){#c3-note-0018}  Thus, in 2012, Google announced
    under a rather generic and difficult-to-Google headline that, from now
    on, "we may combine information you\'ve provided from one service with
    information from other services." See "Updating Our Privacy Policies and
    Terms of Service," *Google Official Blog* (January 24, 2012), online.

    [19](#c3-note-0019a){#c3-note-0019}  Wolfie Christl, "Kommerzielle
    digitale Überwachung im Alltag," *Studie im Auftrag der
    Bundesarbeitskammer* (November 2014), online.

    [20](#c3-note-0020a){#c3-note-0020}  Viktor Mayer-Schönberger and
    Kenneth Cukier, *Big Data: A Revolution That Will Change How We Live,
    Work and Think* (Boston, MA: Houghton Mifflin Harcourt, 2013).

    [21](#c3-note-0021a){#c3-note-0021}  Carlos Diuk, "The Formation of
    Love," *Facebook Data Science Blog* (February 14, 2014), online.

    [22](#c3-note-0022a){#c3-note-0022}  Facebook could have determined this
    simply by examining the location data that were transmitted by its own
    smartphone app. The study in question, however, did not take such
    information into account.

    [23](#c3-note-0023a){#c3-note-0023}  Dan Lyons, "A Lot of Top
    Journalists Don\'t Look at Traffic Numbers: Here\'s Why," *Huffington
    Post* (March 27, 2014), online.

    [24](#c3-note-0024a){#c3-note-0024}  Adam Kramer et al., "Experimental
    Evidence of Massive-Scale Emotional Contagion through Social Networks,"
    *Proceedings of the National Academy of Sciences* 111 (2014): 8788--90.

    [25](#c3-note-0025a){#c3-note-0025}  In all of these studies, it was
    presupposed that users present themselves naïvely and entirely
    truthfully. If someone writes something positive ("I\'m doing great!"),
    it is assumed that this person really is doing well. This, of course, is
    a highly problematic assumption. See John M. Grohol, "Emotional Contagion
    on Facebook? More Like Bad Research Methods," *PsychCentral* (June 23,
    2014), online.

    [26](#c3-note-0026a){#c3-note-0026}  See Adrienne LaFrance, "Even the
    Editor of Facebook\'s Mood Study Thought It Was Creepy," *The Atlantic*
    (June 29, 2014), online: "\[T\]he authors \[...\] said their local
    institutional review board had approved it -- and apparently on the
    grounds that Facebook apparently manipulates people\'s News Feeds all
    the time."

    [27](#c3-note-0027a){#c3-note-0027}  In a rare moment of openness, the
    founder of a large dating service made the following remark: "But guess
    what, everybody: []{#Page_198 type="pagebreak" title="198"}if you use
    the Internet, you\'re the subject of hundreds of experiments at any
    given time, on every site. That\'s how websites work." See Christian
    Rudder, "We Experiment on Human Beings!" *OKtrends* (July 28, 2014),
    online.

    [28](#c3-note-0028a){#c3-note-0028}  Zoe Corbyn, "Facebook Experiment
    Boosts US Voter Turnout," *Nature* (September 12, 2012), online. Because
    of the relative homogeneity of social groups, it can be assumed that a
    large majority of those who were indirectly influenced to vote have the
    same political preferences as those who were directly influenced.

    [29](#c3-note-0029a){#c3-note-0029}  In the year 2000, according to the
    official count, George W. Bush won the decisive state of Florida by a
    mere 537 votes.

    [30](#c3-note-0030a){#c3-note-0030}  Jonathan Zittrain, "Facebook Could
    Decide an Election without Anyone Ever Finding Out," *New Republic*
    (June 1, 2014), online.

    [31](#c3-note-0031a){#c3-note-0031}  This was the central insight that
    Norbert Wiener drew from his experiments on air defense during World War
    II. Although it could never be applied during the war itself, it would
    nevertheless prove of great importance to the development of
    cybernetics.

    [32](#c3-note-0032a){#c3-note-0032}  Gregory Bateson, "Social Planning
    and the Concept of Deutero-learning," in Bateson, *Steps to an Ecology
    of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 166--82, at 177.

    [33](#c3-note-0033a){#c3-note-0033}  Tiqqun, "The Cybernetic
    Hypothesis," p. 4 (online).

    [34](#c3-note-0034a){#c3-note-0034}  B. F. Skinner, *The Behavior of
    Organisms: An Experimental Analysis* (New York: Appleton Century, 1938).

    [35](#c3-note-0035a){#c3-note-0035}  Richard H. Thaler and Cass
    Sunstein, *Nudge: Improving Decisions about Health, Wealth and
    Happiness* (New York: Penguin, 2008).

    [36](#c3-note-0036a){#c3-note-0036}  It happened repeatedly, for
    instance, that pictures of breastfeeding mothers would be removed
    because they apparently violated Facebook\'s rule against sharing
    pornography. After a long protest, Facebook changed its "community
    standards" in 2014. Under the term "Nudity," it now reads as follows:
    "We also restrict some images of female breasts if they include the
    nipple, but we always allow photos of women actively engaged in
    breastfeeding or showing breasts with post-mastectomy scarring. We also
    allow photographs of paintings, sculptures and other art that depicts
    nude figures." See "Community Standards,"
    [Facebook.com](http://Facebook.com) (2017), online.

    [37](#c3-note-0037a){#c3-note-0037}  Michael Seemann, *Digital Tailspin:
    Ten Rules for the Internet after Snowden* (Amsterdam: Institute for
    Network Cultures, 2015).

    [38](#c3-note-0038a){#c3-note-0038}  The exception to this is fairtrade
    products, in which case it is attempted to legitimate their higher
    prices with reference to []{#Page_199 type="pagebreak" title="199"}the
    input -- that is, to the social and ecological conditions of their
    production.

    [39](#c3-note-0039a){#c3-note-0039}  This is only partially true,
    however, as more institutions (universities, for instance) have begun to
    outsource their technical infrastructure (to Google Mail, for example).
    In such cases, people are indeed being coerced, in the classical sense,
    to use these services.

    [40](#c3-note-0040a){#c3-note-0040}  Mary Madden et al., "Teens, Social
    Media and Privacy," *Pew Research Center: Internet, Science & Tech* (May
    21, 2013), online.

    [41](#c3-note-0041a){#c3-note-0041}  Meta-data are data that provide
    information about other data. In the case of an email, the header lines
    (the sender, recipient, date, subject, etc.) form the meta-data, while
    the data are made up of the actual content of communication. In
    practice, however, the two categories cannot always be sharply
    distinguished from one another.
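
    To make the distinction concrete, the following minimal sketch in
    Python (standard library only; the addresses and message are
    hypothetical) separates the two categories:

    ``` {.python}
    # Split an email into meta-data (the header lines) and data
    # (the actual content of communication).
    from email import message_from_string

    raw = ("From: alice@example.org\n"
           "To: bob@example.org\n"
           "Date: Mon, 1 Feb 2016 10:00:00 +0100\n"
           "Subject: Lunch?\n"
           "\n"
           "See you at noon.\n")

    msg = message_from_string(raw)

    metadata = dict(msg.items())  # meta-data: sender, recipient, date, subject
    data = msg.get_payload()      # data: the body, "See you at noon.\n"

    print(metadata)
    print(data)
    ```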

    [42](#c3-note-0042a){#c3-note-0042}  By manipulating online polls, for
    instance, or flooding social mass media with algorithmically generated
    propaganda. See Glenn Greenwald, "Hacking Online Polls and Other Ways
    British Spies Seek to Control the Internet," *The Intercept* (July 14,
    2014), online.

    [43](#c3-note-0043a){#c3-note-0043}  Jeremy Scahill and Glenn Greenwald,
    "The NSA\'s Secret Role in the US Assassination Program," *The
    Intercept* (February 10, 2014), online.

    [44](#c3-note-0044a){#c3-note-0044}  Regarding the interconnections
    between Google and the US State Department, see Julian Assange, *When
    Google Met WikiLeaks* (New York: O/R Books, 2014).

    [45](#c3-note-0045a){#c3-note-0045}  For a catalog of these
    publications, see the DARPA website:
    \<[opencatalog.darpa.mil/SMISC.html](http://opencatalog.darpa.mil/SMISC.html)\>.

    [46](#c3-note-0046a){#c3-note-0046}  See the military\'s own description
    of the project at:
    \<[minerva.dtic.mil/funded.html](http://minerva.dtic.mil/funded.html)\>.

    [47](#c3-note-0047a){#c3-note-0047}  Such is the goal stated on the
    project\'s homepage: \<\>.

    [48](#c3-note-0048a){#c3-note-0048}  Bruce Schneier, "Don\'t Listen to
    Google and Facebook: The Public--Private Surveillance Partnership Is
    Still Going Strong," *The Atlantic* (March 25, 2014), online.

    [49](#c3-note-0049a){#c3-note-0049}  See the documentary film *Low
    Definition Control* (2011), directed by Michael Palm.

    [50](#c3-note-0050a){#c3-note-0050}  Felix Stalder, "In der zweiten
    digitalen Phase: Daten versus Kommunikation," *Le Monde Diplomatique*
    (February 14, 2014), online.

    [51](#c3-note-0051a){#c3-note-0051}  In 2009, the European Parliament
    and the European Council ratified Directive 2009/72/EC, which stipulates
    that, by the year 2020, 80 percent of all households in the EU will have
    to be equipped with an intelligent metering system.[]{#Page_200
    type="pagebreak" title="200"}

    [52](#c3-note-0052a){#c3-note-0052}  There is no consensus about how or
    whether smart meters will contribute to the more efficient use of
    energy. On the contrary, one study commissioned by the German Federal
    Ministry for Economic Affairs and Energy concluded that the
    comprehensive implementation of smart metering would have negative
    economic effects for consumers. See Helmut Edelmann and Thomas Kästner,
    "Cost--Benefit Analysis for the Comprehensive Use of Smart Metering,"
    *Ernst & Young* (June 2013), online.

    [53](#c3-note-0053a){#c3-note-0053}  Quoted from "United Nations Working
    towards Urbanization," *United Nations Urbanization Agenda* (July 7,
    2015), online. For a comprehensive critique of such visions, see Adam
    Greenfield, *Against the Smart City* (New York City: Do Projects, 2013).

    [54](#c3-note-0054a){#c3-note-0054}  Stefan Selke, *Lifelogging: Warum
    wir unser Leben nicht digitalen Technologien überlassen sollten*
    (Berlin: Econ, 2014).

    [55](#c3-note-0055a){#c3-note-0055}  Rainer Schneider, "Rabatte für
    Gesundheitsdaten: Was die deutschen Krankenversicherer planen," *ZDNet*
    (December 18, 2014), online \[--trans.\].

    [56](#c3-note-0056a){#c3-note-0056}  Frank Pasquale, *The Black Box
    Society: The Secret Algorithms that Control Money and Information*
    (Cambridge, MA: Harvard University Press, 2015).

    [57](#c3-note-0057a){#c3-note-0057}  "Facebook Gives People around the
    World the Power to Publish Their Own Stories," *Facebook Help Center*
    (2017), online.

    [58](#c3-note-0058a){#c3-note-0058}  Lena Kampf et al., "Deutsche im
    NSA-Visier: Als Extremist gebrandmarkt," *Tagesschau.de* (July 3, 2014),
    online.

    [59](#c3-note-0059a){#c3-note-0059}  Florian Klenk, "Der Prozess gegen
    Josef S.," *Falter* (July 8, 2014), online.

    [60](#c3-note-0060a){#c3-note-0060}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 35.

    [61](#c3-note-0061a){#c3-note-0061}  This is so regardless of whether
    the dominant regime, eager to seem impervious to opposition, represents
    itself as the one and only alternative. See Byung-Chul Han, "Why
    Revolution Is No Longer Possible," *Transformation* (October 23, 2015),
    online.

    [62](#c3-note-0062a){#c3-note-0062}  See the *Süddeutsche Zeitung*\'s
    special website devoted to the "Offshore Leaks":
    \<\>.

    [63](#c3-note-0063a){#c3-note-0063}  The *Süddeutsche Zeitung*\'s
    website devoted to the "Luxembourg Leaks" can be found at:
    \<\>.

    [64](#c3-note-0064a){#c3-note-0064}  See the documentary film
    *Citizenfour* (2014), directed by Laura Poitras.

    [65](#c3-note-0065a){#c3-note-0065}  Felix Stalder, "WikiLeaks und die
    neue Ökologie der Nachrichtenmedien," in Heinrich Geiselberger (ed.),
    *WikiLeaks und die Folgen* (Berlin: Suhrkamp, 2011), pp.
    96--110.[]{#Page_201 type="pagebreak" title="201"}

    [66](#c3-note-0066a){#c3-note-0066}  Yochai Benkler, "Coase\'s Penguin,
    or, Linux and the Nature of the Firm," *Yale Law Journal* 112 (2002):
    369--446.

    [67](#c3-note-0067a){#c3-note-0067}  For an overview of the many commons
    traditions, see David Bollier and Silke Helfrich, *The Wealth of the
    Commons: A World beyond Market and State* (Amherst: Levellers Press,
    2012).

    [68](#c3-note-0068a){#c3-note-0068}  Massimo De Angelis and Stavros
    Stavrides, "On the Commons: A Public Interview," *e-flux* 17 (June
    2010), online.

    [69](#c3-note-0069a){#c3-note-0069}  Elinor Ostrom, *Governing the
    Commons: The Evolution of Institutions for Collective Action*
    (Cambridge: Cambridge University Press, 1990).

    [70](#c3-note-0070a){#c3-note-0070}  Michael McGinnis and Elinor Ostrom,
    "Design Principles for Local and Global Commons," *International
    Political Economy and International Institutions* 2 (1996): 465--93.

    [71](#c3-note-0071a){#c3-note-0071}  I say "allegedly" because the
    argument about their inevitable tragedy, which has been made without any
    empirical evidence, falsely conceives of the commons as a limited but
    fully unregulated resource. Because people are only interested in
    maximizing their own short-term benefits -- or so the conclusion goes --
    the resource will either have to be privatized or administered by the
    government in order to protect it from being over-used and to ensure the
    well-being of everyone involved. It was never taken into consideration
    that users could speak with one another and organize themselves. See
    Garrett Hardin, "The Tragedy of the Commons," *Science* 162 (1968):
    1243--8.

    [72](#c3-note-0072a){#c3-note-0072}  Jonathan Rowe, "The Real Tragedy:
    Ecological Ruin Stems from What Happens to -- Not What Is Caused by --
    the Commons," *On the Commons* (April 30, 2013), online.

    [73](#c3-note-0073a){#c3-note-0073}  James Boyle, "A Politics of
    Intellectual Property: Environmentalism for the Net?" *Duke Law Journal*
    47 (1997): 87--116.

    [74](#c3-note-0074a){#c3-note-0074}  Quoted from:
    \<[debian.org/intro/about.html](http://debian.org/intro/about.html)\>.

    [75](#c3-note-0075a){#c3-note-0075}  The Debian Social Contract can be
    read at: \<[debian.org/social_contract](http://debian.org/social_contract)\>.

    [76](#c3-note-0076a){#c3-note-0076}  Gabriella E. Coleman and Benjamin
    Hill, "The Social Production of Ethics in Debian and Free Software
    Communities: Anthropological Lessons for Vocational Ethics," in Stefan
    Koch (ed.), *Free/Open Source Software Development* (Hershey, PA: Idea
    Group, 2005), pp. 273--95.

    [77](#c3-note-0077a){#c3-note-0077}  While it is relatively easy to
    identify the inner circle of such a project, it is impossible to
    determine the number of those who have contributed to it. This is
    because, among other reasons, the distinction between producers and
    consumers is so fluid that any firm line drawn between them for
    quantitative purposes would be entirely arbitrary. Should someone who
    writes the documentation be considered a producer of a software
    []{#Page_202 type="pagebreak" title="202"}project? To be counted as
    such, is it sufficient to report a single bug? Or to confirm the
    validity of a bug report that has already been sent? Should everyone be
    counted who has helped another person solve a problem in a forum?

    [78](#c3-note-0078a){#c3-note-0078}  Raphaël Hertzog, "The State of the
    Debian--Ubuntu Relationship" (December 6, 2010), online.

    [79](#c3-note-0079a){#c3-note-0079}  This, in any case, is the number of
    free software programs that appears in Wikipedia\'s entry titled "List
    of Text Editors." This list, however, is probably incomplete.

    [80](#c3-note-0080a){#c3-note-0080}  In this regard, the most
    significant legal changes were enacted through the Copyright Treaty of
    the World Intellectual Property Organization (1996), the US Digital
    Millennium Copyright Act (1998), and the EU guidelines for the
    harmonization of certain aspects of copyright (2001). Since 2006, a
    popular tactic in Germany and elsewhere has been to issue floods of
    cease-and-desist letters. This involves sending tens of thousands of
    semi-automatically generated threats of legal action with demands for
    payment in response to the presumably unauthorized use of
    copyright-protected material.

    [81](#c3-note-0081a){#c3-note-0081}  Examples include the Open Content
    License (1998) and the Free Art License (2000).

    [82](#c3-note-0082a){#c3-note-0082}  Benjamin Mako Hill, "Towards a
    Standard of Freedom: Creative Commons and the Free Software Movement,"
    *mako.cc* (June 29, 2005), online.

    [83](#c3-note-0083a){#c3-note-0083}  Since 2007, Wikipedia has
    continuously been one of the 10 most-used websites.

    [84](#c3-note-0084a){#c3-note-0084}  One of the best studies of
    Wikipedia remains Christian Stegbauer, *Wikipedia: Das Rätsel der
    Kooperation* (Wiesbaden: Verlag für Sozialwissenschaften, 2009).

    [85](#c3-note-0085a){#c3-note-0085}  Dan Wielsch, "Governance of Massive
    Multiauthor Collaboration -- Linux, Wikipedia and Other Networks:
    Governed by Bilateral Contracts, Partnerships or Something in Between?"
    *JIPITEC* 1 (2010): 96--108.

    [86](#c3-note-0086a){#c3-note-0086}  See Wikipedia\'s 2013--14
    fundraising report at:
    \<[meta.wikimedia.org/wiki/Fundraising/2013-14\_Report](http://meta.wikimedia.org/wiki/Fundraising/2013-14_Report)\>.

    [87](#c3-note-0087a){#c3-note-0087}  Roland Ramthun, "Offene Geodaten
    durch OpenStreetMap," in Ulrich Herb (ed.), *Open Initiatives: Offenheit
    in der digitalen Welt und Wissenschaft* (Saarbrücken: Universaar, 2012),
    pp. 159--84.

    [88](#c3-note-0088a){#c3-note-0088}  "Automated Edits Code of Conduct,"
    [WikiOpenStreetMap.org](http://WikiOpenStreetMap.org) (March 15, 2015),
    online.

    [89](#c3-note-0089a){#c3-note-0089}  See the information provided at:
    \<[wiki.osmfoundation.org/wiki/Finances](http://wiki.osmfoundation.org/wiki/Finances)\>.

    [90](#c3-note-0090a){#c3-note-0090}  As part of its "Knight News
    Challenge," for instance, the American Knight Foundation gave \$570,000
    in 2012 to the []{#Page_203 type="pagebreak" title="203"}company Mapbox
    in order for the latter to make improvements to OSM\'s infrastructure.

    [91](#c3-note-0091a){#c3-note-0091}  This was accomplished, for
    instance, by introducing methods for data indexing and quality control.
    See Ramthun, "Offene Geodaten durch OpenStreetMap" (cited above).

    [92](#c3-note-0092a){#c3-note-0092}  Trevor Paglen and Adam C. Thompson,
    *Torture Taxi: On the Trail of the CIA\'s Rendition Flights* (Hoboken,
    NJ: Melville House, 2006).

    [93](#c3-note-0093a){#c3-note-0093}  See the project\'s website:
    \<[airqualityegg.com](http://airqualityegg.com)\>.

    [94](#c3-note-0094a){#c3-note-0094}  See the project\'s homepage:
    \<[index.okfn.org](http://index.okfn.org)\>.

    [95](#c3-note-0095a){#c3-note-0095}  The homepage of the Digital
    Openness Index can be found at: \<[do-index.org](http://do-index.org)\>.

    [96](#c3-note-0096a){#c3-note-0096}  Tildy Bayar, "Community Wind
    Arrives Stateside," *Renewable Energy World* (July 5, 2012), online.

    [97](#c3-note-0097a){#c3-note-0097}  Jeremy Rifkin, *The Zero Marginal
    Cost Society: The Internet of Things, the Collaborative Commons and the
    Eclipse of Capitalism* (New York: Palgrave Macmillan, 2014), p. 217.

    [98](#c3-note-0098a){#c3-note-0098}  See, for instance, Ludger
    Eversmann, *Post-Kapitalismus: Blueprint für die nächste Gesellschaft*
    (Hanover: Heise Zeitschriften Verlag, 2014).

    [99](#c3-note-0099a){#c3-note-0099}  Ron Amadeo, "Google\'s Iron Grip on
    Android: Controlling Open Source by Any Means Necessary," *Ars Technica*
    (October 21, 2013), online.

    [100](#c3-note-0100a){#c3-note-0100}  Seb Olma, "To Share or Not to
    Share," [nettime.org](http://nettime.org) (October 20, 2014), online.

    [101](#c3-note-0101a){#c3-note-0101}  Susie Cagle, "The Case against
    Sharing," *The Nib* (May 27, 2014), online.[]{#Page_204 type="pagebreak"
    title="204"}
    :::
    :::

    [Copyright page]{.chapterTitle} {#ffirs03}
    =
    ::: {.section}
    First published in German as *Kultur der Digitalitaet* © Suhrkamp Verlag,
    Berlin, 2016

    This English edition © Polity Press, 2018

    Polity Press

    65 Bridge Street

    Cambridge CB2 1UR, UK

    Polity Press

    101 Station Landing

    Suite 300

    Medford, MA 02155, USA

    All rights reserved. Except for the quotation of short passages for the
    purpose of criticism and review, no part of this publication may be
    reproduced, stored in a retrieval system or transmitted, in any form or
    by any means, electronic, mechanical, photocopying, recording or
    otherwise, without the prior permission of the publisher.

    P. 51, Brautigan, Richard: From "All Watched Over by Machines of Loving
    Grace" by Richard Brautigan. Copyright © 1967 by Richard Brautigan,
    renewed 1995 by Ianthe Brautigan Swenson. Reprinted with the permission
    of the Estate of Richard Brautigan; all rights reserved.

    ISBN-13: 978-1-5095-1959-0

    ISBN-13: 978-1-5095-1960-6 (pb)

    A catalogue record for this book is available from the British Library.

    Library of Congress Cataloging-in-Publication Data

    Names: Stalder, Felix, author.

    Title: The digital condition / Felix Stalder.

    Other titles: Kultur der Digitalitaet. English

    Description: Cambridge, UK ; Medford, MA : Polity Press, \[2017\] \|
    Includes bibliographical references and index.

    Identifiers: LCCN 2017024678 (print) \| LCCN 2017037573 (ebook) \| ISBN
    9781509519620 (Mobi) \| ISBN 9781509519637 (Epub) \| ISBN 9781509519590
    (hardback) \| ISBN 9781509519606 (pbk.)

    Subjects: LCSH: Digital communications--Social aspects. \| Information
    society. \| Information society--Forecasting.

    Classification: LCC HM851 (ebook) \| LCC HM851 .S728813 2017 (print) \|
    DDC 302.23/1--dc23

    LC record available at \<[lccn.loc.gov/2017024678](https://lccn.loc.gov/2017024678)\>

    Typeset in 10.5 on 12 pt Sabon

    by Toppan Best-set Premedia Limited

    Printed and bound in Great Britain by CPI Group (UK) Ltd, Croydon

    The publisher has used its best endeavours to ensure that the URLs for
    external websites referred to in this book are correct and active at the
    time of going to press. However, the publisher has no responsibility for
    the websites and can make no guarantee that a site will remain live or
    that the content is or will remain appropriate.

    Every effort has been made to trace all copyright holders, but if any
    have been inadvertently overlooked the publisher will be pleased to
    include any necessary credits in any subsequent reprint or edition.

    For further information on Polity, visit our website:
    politybooks.com[]{#Page_iv type="pagebreak" title="iv"}
    :::
     
