Dekker & Barok
Copying as a Way to Start Something New: A Conversation with Dusan Barok about Monoskop
2017
COPYING AS A WAY TO START SOMETHING NEW
A Conversation with Dusan Barok about Monoskop
Annet Dekker
Dusan Barok is an artist, writer, and cultural activist involved
in critical practice in the fields of software, art, and theory. After founding and organizing the online culture portal
Koridor in Slovakia from 1999–2002, in 2003 he co-founded
the BURUNDI media lab where he organized the Translab
evening series. A year later, the first ideas about building an
online platform for texts and media started to emerge and
Monoskop became a reality. More than a decade later, Barok
is well-known as the main editor of Monoskop. In 2016, he
began a PhD research project at the University of Amsterdam. His project, titled Database for the Documentation of
Contemporary Art, investigates art databases as discursive
platforms that provide context for artworks. In an extended
email exchange, we discuss the possibilities and constraints of an online ‘archive’.
ANNET DEKKER
You started Monoskop in 2004, already some time ago. What
does the name mean?
DUSAN BAROK
‘Monoskop’ is the Slovak equivalent of the English ‘monoscope’, an electron tube used in analogue TV broadcasting to produce images of test cards, station logotypes, and error messages, and also to calibrate cameras. Monoscopes were automated television announcers designed to speak to both live and machine audiences about the status of a channel, broadcasting purely phatic messages.
AD
Can you explain why you wanted to do the project and how it
developed to what it is now? In other words, what were your
main aims and have they changed? If so, in which direction
and what caused these changes?
DB
I began Monoskop as one of the strands of the BURUNDI
media lab in Bratislava. Originally, it was designed as a wiki
website for documenting media art and culture in the eastern part of Europe, whose backbone consisted of city entries
composed of links to separate pages about various events,
initiatives, and individuals. In the early days it was modelled
on Wikipedia (which had been running for two years when
Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over
the years, the geographic and thematic boundaries have
gradually expanded to embrace the arts and humanities in
their widest sense, focusing primarily on lesser-known
phenomena. [1] Perhaps the biggest change is the ongoing shift from mapping people, events, and places towards synthesizing discourses.

[1] See for example https://monoskop.org/Features. Accessed 28 May 2016.
A turning point occurred during my studies at the
Piet Zwart Institute, in the Networked Media programme
from 2010–2012, which combined art, design, software,
and theory with support in the philosophy of open source
and prototyping. While there, I was researching aspects of
the networked condition and how it transforms knowledge,
sociality and economics: I wrote research papers on leaking
as a technique of knowledge production, a critique of the
social graph, and on the libertarian values embedded in the
design of digital currencies. I was ready for more practice.
When Aymeric Mansoux, one of the tutors, encouraged me
to develop my then side-project Monoskop into a graduation
work, the timing was good.
The website got its own domain, a redesign, and most
crucially, the Monoskop wiki was restructured from its
focus on media art and culture towards the much wider embrace of the arts and humanities. It turned to a media library of sorts. The graduation work also consisted of a symposium about personal collecting and media archiving, [2] which saw its loose follow-ups on media aesthetics (in Bergen) [3] and on knowledge classification and archives (in Mons) [4] last year.

[2] https://monoskop.org/Symposium. Accessed 28 May 2016.
[3] https://monoskop.org/The_Extensions_of_Many. Accessed 28 May 2016.
[4] https://monoskop.org/Ideographies_of_Knowledge. Accessed 28 May 2016.
AD
Did you have a background in library studies, or have you taken their ideas/methods of systemization and categorization (meta data)? If not, what are your methods and how did you develop them?
DB
Besides the standard literature in information science (I
have a degree in information technologies), I read some
works of documentation scientists Paul Otlet and Suzanne
Briet, historians such as W. Boyd Rayward and Ronald E.
Day, as well as translated writings of Michel Pêcheux and
other French discourse analysts of the 1960s and 1970s.
This interest was triggered in late 2014 by the confluence
of Femke’s Mondotheque project and an invitation to be an
artist-in-residence in Mons in Belgium at the Mundaneum,
home to Paul Otlet’s recently restored archive.
This led me to identify three tropes of organizing and
navigating written records, which have guided my thinking
about libraries and research ever since: class, reference,
and index. Classification entails tree-like structuring, such
as faceting the meanings of words and expressions, and
developing classification systems for libraries. Referencing
stands for citations, hyperlinking and bibliographies. Indexing ranges from the listing of occurrences of selected terms
to an ‘absolute’ index of all terms, enabling full-text search.
With this in mind, I have done a number of experiments.
There is an index of selected persons and terms from
across the Monoskop wiki and Log. [5] There is a growing list of wiki entries with bibliographies and institutional infrastructures of fields and theories in the humanities. [6] There is a lexicon aggregating entries from some ten dictionaries of the humanities into a single page with hyperlinks to each full entry (unpublished). There is an alternative interface to the Monoskop Log, in which entries are navigated solely through a tag cloud acting as a multidimensional filter (unpublished). There is a reader containing some fifty books whose mutual references are turned into hyperlinks, and whose main interface consists of terms specific to each text, generated through a tf-idf algorithm (unpublished). And so on.

[5] https://monoskop.org/Index. Accessed 28 May 2016.
[6] https://monoskop.org/Humanities. Accessed 28 May 2016.
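To make the last experiment concrete, here is a minimal sketch (not Monoskop’s actual code) of how tf-idf can surface the terms most specific to each text in a small corpus; the tokenizer and the corpus contents are placeholders.

```python
# A minimal tf-idf sketch: rank the terms most specific to each text in a small
# corpus. The tokenization and example corpus are illustrative only.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def top_terms(corpus, k=10):
    """Return the k highest-scoring tf-idf terms for each document."""
    docs = [Counter(tokenize(t)) for t in corpus]
    n_docs = len(docs)
    # Document frequency: in how many texts does each term occur?
    df = Counter()
    for counts in docs:
        df.update(counts.keys())
    results = []
    for counts in docs:
        total = sum(counts.values())
        scores = {
            term: (freq / total) * math.log(n_docs / df[term])
            for term, freq in counts.items()
        }
        results.append(sorted(scores, key=scores.get, reverse=True)[:k])
    return results

# Usage: each element of `corpus` would be the full text of one book.
corpus = ["text of book one ...", "text of book two ...", "text of book three ..."]
print(top_terms(corpus, k=5))
```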
AD
Indeed, looking at the archive in many alternative ways has been an interesting process, clearly showing the influence of a changing back-end system. Are you interested in the idea of sharing and circulating texts as a new way not just of accessing and distributing but perhaps also of production—and publishing? I’m thinking how Aaaaarg started as a way to share and exchange ideas about a text. In what way do you think Monoskop plays (or could play) with these kinds of mechanisms? Do you think it brings out a new potential in publishing?

DB
The publishing market frames the publication as a singular
body of work, autonomous from other titles on offer, and
subjects it to the rules of the market—with a price tag and
copyright notice attached. But for scholars and artists, these
are rarely an issue. Most academic work is subsidized from
public sources in the first place, and many would prefer to
give their work away for free since openness attracts more
citations. Why they opt to submit to the market is for quality
editing and an increase of their own symbolic value in direct
proportion to the ranking of their publishing house. This
is not dissimilar from the music industry. And indeed, for
many the goal is to compose chants that would gain popularity across academia and get their place in the popular
imagination.
On the other hand, besides providing access, digital
libraries are also fit to provide context by treating publications as a corpus of texts that can be accessed through an
unlimited number of interfaces designed with an understanding of the functionality of databases and an openness
to the imagination of the community of users. This can
be done by creating layers of classification, interlinking
bodies of texts through references, creating alternative
indexes of persons, things and terms, making full-text
search possible, making visual search possible—across
the whole of the corpus as well as its parts, and so on. Isn’t
this what makes a difference? To be sure, websites such
as Aaaaarg and Monoskop have explored only the tip of
the iceberg of possibilities. There is much more to tinker
and hack around.
AD
It is interesting that whilst the accessibility and search potential has radically changed, the content, a book or any other text, is still a particular kind of thing with its own characteristics and forms. Whereas the process of writing texts seems hard to change, would you be interested in creating more alliances between texts to bring out new bibliographies? In this sense, starting to produce new texts, by including other texts and documents, like emails, visuals, audio, CD-ROMs, or even un-published texts or manuscripts?

DB
Currently Monoskop is compiling more and more ‘source’
bibliographies, containing digital versions of actual texts
they refer to. This has been very much in focus in the past
two or three years and Monoskop is now home to hundreds
of bibliographies of twentieth-century artists, writers, groups,
and movements as well as of various theories and humanities disciplines. [7] As the next step I would like to move on to enabling full-text search within each such bibliography. This will make more apparent that the ‘source’ bibliography is a form of anthology, a corpus of texts representing a discourse. Another issue is to activate cross-references within texts—to turn page numbers in bibliographic citations inside texts into hyperlinks leading to other texts.

[7] See for example https://monoskop.org/Foucault, https://monoskop.org/Lissitzky, https://monoskop.org/Humanities. All accessed 28 May 2016.
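As an illustration of what activating such cross-references might involve, here is a hypothetical sketch that rewrites citations of the form ‘(Author Year, page)’ into hyperlinks; the citation pattern, the bibliography mapping and the target URL are all invented for illustration.

```python
# Hypothetical sketch of activating cross-references: turn page references in
# citations like "(Otlet 1934, 41)" into links to the cited text.
import re

BIBLIOGRAPHY = {
    "Otlet 1934": "https://example.org/reader/otlet-1934.pdf",  # invented target
}

CITATION = re.compile(r"\((?P<key>[A-Z][a-z]+ \d{4}), (?P<page>\d+)\)")

def link_citations(html):
    def to_link(match):
        target = BIBLIOGRAPHY.get(match.group("key"))
        if target is None:
            return match.group(0)  # leave unknown citations untouched
        return f'<a href="{target}#page={match.group("page")}">{match.group(0)}</a>'
    return CITATION.sub(to_link, html)

print(link_citations("Classification is discussed at length (Otlet 1934, 41)."))
```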
This is to experiment further with the specificity of digital text, which is different from both oral speech and printed books. These can be described as three distinct yet mutually
encapsulated domains. Orality emphasizes the sequence
and narrative of an argument, in which words themselves
are imagined as constituting meaning. Specific to writing,
on the other hand, is referring to the written record; texts
are brought together by way of references, which in turn
create context, also called discourse. Statements are ‘fixed’
to paper and meaning is constituted by their contexts—both
within a given text and within a discourse in which it is embedded. What is specific to digital text, however, is that we can search it in milliseconds. Full-text search is enabled by the index—search engines operate thanks to bots that assign each expression a unique address and store it in a database. In this respect, the index usually found at the end of a printed book is something that has been automated with the arrival of machine search.
In other words, even though knowledge in the age of the internet is still being shaped by the departmentalization of academia and its related procedures and rituals of discourse production, and its modes of expression are centred around the verbal rhetoric, the flattening effects of the index really transformed the ways in which we come to ‘know’ things. To ‘write’ a ‘book’ in this context is to produce a searchable database instead.
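A toy version of the index that makes such search possible is sketched below: an inverted index mapping each term to the documents and positions where it occurs. The document names and contents are hypothetical.

```python
# A toy inverted index: map each term to the documents (and positions) where it
# occurs, which is what lets a query be answered in milliseconds instead of by
# scanning every text. Document names and contents are hypothetical.
import re
from collections import defaultdict

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def build_index(documents):
    """documents: dict of name -> full text. Returns term -> {name: [positions]}."""
    index = defaultdict(lambda: defaultdict(list))
    for name, text in documents.items():
        for position, term in enumerate(tokenize(text)):
            index[term][name].append(position)
    return index

def search(index, query):
    """Return the names of documents containing every term of the query."""
    terms = tokenize(query)
    if not terms:
        return set()
    results = set(index.get(terms[0], {}))
    for term in terms[1:]:
        results &= set(index.get(term, {}))
    return results

docs = {
    "otlet_1934.txt": "the universal book of knowledge ...",
    "briet_1951.txt": "what is documentation ...",
}
index = build_index(docs)
print(search(index, "knowledge"))
```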
AD
So, perhaps we finally have come to ‘the death of the author’,
at least insofar as automated mechanisms are becoming active agents in the (re)creation process. To return to
Monoskop in its current form, what choices do you make
regarding the content of the repositories, are there things
you don’t want to collect, or wish you could but have not
been able to?
DB
In a sense, I turned to a wiki and started Monoskop as
a way to keep track of my reading and browsing. It is a
by-product of a succession of my interests, obsessions, and
digressions. That it is publicly accessible is a consequence
of the fact that paper notebooks, text files kept offline and
private wikis proved to be inadequate at the moment when I
needed to quickly find notes from reading some text earlier.
It is not perfect, but it solved the issue of immediate access
and retrieval. Plus there is a bonus of having the body of
my past ten or twelve years of reading mutually interlinked
and searchable. An interesting outcome is that these ‘notes’
are public—one is motivated to formulate and frame them
so as to be readable and useful for others as well. A similar
difference is between writing an entry in a personal diary
and writing a blog post. That is also why the autonomy
of technical infrastructure is so important here. Posting
research notes on Facebook may increase one’s visibility
among peers, but the ‘terms of service’ say explicitly that
anything can be deleted by administrators at any time,
without any reason. I ‘collect’ things that I wish to be able
to return to, to remember, or to recollect easily.
AD
Can you describe the process, how do you get the books,
already digitized, or do you do a lot yourself? In other words,
could you describe the (technical) process and organizational aspects of the project?
DB
In the beginning, I spent a lot of time exploring other digital
libraries which served as sources for most of the entries on
Log (Gigapedia, Libgen, Aaaaarg, Bibliotik, Scribd, Issuu,
Karagarga, Google filetype:pdf). Later I started corresponding with a number of people from around the world (NYC,
Rotterdam, Buenos Aires, Boulder, Berlin, Ploiesti, etc.) who
contribute scans and links to scans on an irregular basis.
Out-of-print and open-access titles often come directly from
authors and publishers. Many artists’ books and magazines
were scraped or downloaded through URL manipulation
from online collections of museums, archives and libraries.
Needless to say, my offline archive is much bigger than
what is on Monoskop. I tend to put online the files I prefer
not to lose. The web is the best backup solution I have
found so far.
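The ‘URL manipulation’ mentioned above can be as simple as iterating over a predictable URL pattern. The following sketch is purely illustrative; the collection URL and file naming scheme are invented.

```python
# Hypothetical sketch of "URL manipulation": many museum and library collections
# expose scans at predictable, numbered URLs, so a whole set can be fetched by
# iterating over the pattern. The URL below is invented for illustration.
import urllib.error
import urllib.request

BASE = "https://collection.example.org/scans/magazine-{year}-{issue:02d}.pdf"

for year in range(1965, 1970):
    for issue in range(1, 13):
        url = BASE.format(year=year, issue=issue)
        try:
            with urllib.request.urlopen(url) as response:
                with open(f"magazine-{year}-{issue:02d}.pdf", "wb") as out:
                    out.write(response.read())
        except urllib.error.HTTPError:
            pass  # that issue was never digitized or the pattern does not match
```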
The Monoskop wiki is open for everyone to edit; any user
can upload their own works or scans and many do. Many of
those who spent more time working on the website ended up
being my friends. And many of my friends ended up having
an account as well :). For everyone else, there is no record
kept about what one downloaded, what one read and for
how long... we don’t care, we don’t track.
AD
In what way has the larger (free) publishing context changed your project? There are currently several free text-sharing initiatives around (some, like Textz.com or Aaaaarg, already existed before you started). How do you collaborate with, or distinguish yourselves from, each other?
DB
It would not be an overstatement to say that while in the previous decade Monoskop was shaped primarily by the ‘media culture’ milieu which it intended to document, the branching out of its repository of highlighted publications, Monoskop Log, in 2009, and the broadening of its focus to also include the whole of the twentieth and twenty-first centuries, situate it more firmly in the context of online archives, and especially digital libraries.
I only got to know others in this milieu later. I approached
Sean Dockray in 2010, Marcell Mars approached me the
following year, and then in 2013 he introduced me to Kenneth Goldsmith. We are in steady contact, especially through
public events hosted by various cultural centres and galleries.
The first large one was held at Ljubljana’s hackerspace Kiberpipa in 2012. Later came the conferences and workshops
organized by Kuda at a youth centre in Novi Sad (2013), by
the Institute of Network Cultures at WORM, Rotterdam (2014),
WKV and Akademie Schloss Solitude in Stuttgart (2014),
Mama & Nova Gallery in Zagreb (2015), ECC at Mundaneum,
Mons (2015), and most recently by the Media Department
of the University of Malmö (2016). [8] The leitmotif of all these events was the digital library, and their atmosphere can be described as the spirit of early hacker culture that eventually left the walls of a computer lab. Only rarely have there been professional librarians, archivists, and publishers among the speakers, even though the voices represented were quite diverse.

[8] For more information see https://monoskop.org/Digital_libraries#Workshops_and_conferences. Accessed 28 May 2016.
To name just the more frequent participants... Marcell
and Tom Medak (Memory of the World) advocate universal
access to knowledge informed by the positions of the Yugoslav
Marxist school Praxis; Sean’s work is critical of the militarization and commercialization of the university (in the
context of which Aaaaarg will always come as secondary, as
an extension of The Public School in Los Angeles); Kenneth
aims to revive the literary avant-garde while standing on the
shoulders of his heroes documented on UbuWeb; Sebastian
Lütgert and Jan Berger are the most serious software developers among us, while their projects such as Textz.com and
Pad.ma should be read against critical theory and Situationist cinema; Femke Snelting has initiated the collaborative
research-publication Mondotheque about the legacy of the
early twentieth century Brussels-born information scientist
Paul Otlet, triggered by the attempt of Google to rebrand him
as the father of the internet.
I have been trying to identify implications of the digital-networked textuality for knowledge production, including humanities research, while speaking from the position
of a cultural worker who spent his formative years in the
former Eastern Bloc, experiencing freedom as that of unprecedented access to information via the internet following
the fall of the Berlin Wall. In this respect, Monoskop is a way to bring into ‘archival consciousness’ what the East had missed out on during the Cold War. And also, more generally, what the non-West had missed out on in the polarized world,
and vice versa, what was invisible in the formal Western
cultural canons.
There have been several attempts to develop new projects,
and the collaborative efforts have materialized in shared
infrastructure and introductions of new features in respective platforms, such as PDF reader and full-text search on
Aaaaarg. Marcell and Tom along with their collaborators have
been steadily developing the Memory of the World library and
Sebastian resuscitated Textz.com. Besides that, there are
overlaps in titles hosted in each library, and Monoskop bibliographies extensively link to scans on Libgen and Aaaaarg,
while artists’ profiles on the website link to audio and video
recordings on UbuWeb.
AD
It is interesting to hear that there weren’t any archivists or professional librarians involved (yet). What is your position
towards these professional and institutional entities and
persons?
DB
As the recent example of Sci-Hub showed, in the age of
digital networks, for many researchers libraries are primarily free proxies to corporate repositories of academic
journals. [9] Their other emerging role is that of a digital repository of works in the public domain (the role pioneered in the United States by Project Gutenberg and Internet Archive). There have been too many attempts to transpose librarians’ techniques from the paperbound world into the digital domain. Yet, as I said before, there is much more to explore. Perhaps the most exciting inventive approaches can be found in the field of classics, for example in the Perseus Digital Library & Catalog and the Homer Multitext Project. Perseus combines digital editions of ancient literary works with multiple lexical tools in a way that even a non-professional can check and verify a disputable translation of a quote. Something that is hard to imagine being possible in print.

[9] For more information see www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone. Accessed 28 May 2016.
AD
I think it is interesting to see how Monoskop and other repositories like it have gained different constituencies globally; for one, you can see a kind of shift in the texts being put up. From the start you tried to bring in a strong ‘eastern European voice’, yet at the moment the content of the repository reflects a very western perspective on critical theory. What are your future goals, and do you think it would be possible to include other voices? For example, have you ever considered the possibility of users uploading and editing texts themselves?
DB
The site certainly started with the primary focus on east-central European media art and culture, which I considered
myself to be part of in the early 2000s. I was naive enough
to attempt to make a book on the theme between 2008 and 2010.
During that period I came to notice the ambivalence of the
notion of medium in an art-historical and technological
sense (thanks to Florian Cramer). My understanding of
media art was that it is an art specific to its medium, very
much in Greenbergian terms, extended to the more recent
‘developments’, which were supposed to range from neo-geometrical painting through video art to net art.
At the same time, I implicitly understood art in the sense
of ‘expanded arts’, as employed by Fluxus in the early
1960s—objects as well as events that go beyond the (academic) separation between the arts to include music, film,
poetry, dance, design, publishing, etc., which in turn made
me also consider such phenomena as experimental film,
electro-acoustic music and concrete poetry.
Add to it the geopolitically unstable notion of East-Central
Europe and the striking lack of research in this area and
all you end up with is a headache. It took me a while to
realize that there’s no point even attempting to write a coherent narrative of the history of media-specific expanded
arts of East-Central Europe of the past hundred years. I
ended up with a wiki page outlining the supposed milestones along with a bibliography. [10]

[10] https://monoskop.org/CEE. Accessed 28 May 2016. And https://monoskop.org/Central_and_Eastern_Europe_Bibliography. Accessed 28 May 2016.

For this strand, the wiki served as the main notebook, leaving behind hundreds of wiki entries. The Log was more or less a ‘log’ of my research path and the presence of ‘western’ theory is to a certain extent a by-product of my search for a methodology and theoretical references.
As an indirect outcome, a new wiki section was
launched recently. Instead of writing a history of media-specific ‘expanded arts’ in one corner of the world, it takes
a somewhat different approach. Not a sequential text, not
even an anthology, it is an online single-page annotated
index, a ‘meta-encyclopaedia’ of art movements and styles,
intended to offer an expansion of the art-historical canonical
prioritization of the western painterly-sculptural tradition
to also include other artists and movements around the world. [11]

[11] https://monoskop.org/Art. Accessed 28 May 2016.
AD
Can you say something about the longevity of the project?
You briefly mentioned before that the web was your best
backup solution. Yet, it is of course known that websites
and databases require a lot of maintenance, so what will
happen to the type of files that you offer? More and more
voices are saying that, for example, the PDF format is anything but stable. How do you deal with such challenges?
DB
Surely, in the realm of bits, nothing is designed to last
forever. The uncritical adoption of Flash has turned out to be perhaps the worst tragedy so far. But while there certainly were saner alternatives if one was OK with renouncing its emblematic visual effects and the aesthetics that went
with it, with PDF it is harder. There are EPUBs, but scholarly publications are simply unthinkable without page
numbers, which are not supported in this format. Another
challenge the EPUB faces is from artists' books and other
design- and layout-conscious publications—its simplified
HTML format does not match the range of possibilities for
typography and layout one is used to from designing for
paper. Another open-source solution, PNG tarballs, is not
a viable alternative for sharing books.
The main schism between PDF and HTML is that one represents the domain of print (easily portable, and with fixed
page size), while the other represents the domain of the web (embedded within it by hyperlinks pointing in both directions, and with flexible page size). EPUB was developed with the intention of synthesizing both of them into a single format, but instead
it reduces them into a third container, which is doomed to
reinvent the whole thing once again.
It is unlikely that an ultimate converter between PDF and HTML will ever appear, simply because of the specificities
of print and the web and the fact that they overlap only in
some respects. Monoskop tends to provide HTML formats
next to PDFs where time allows. And if the PDF were to
suddenly be doomed, there would be a big conversion party.
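One plausible, admittedly lossy way to produce an HTML version next to a PDF is to extract each page with an off-the-shelf library. The sketch below uses the PyMuPDF package and hypothetical file names; it is not Monoskop’s actual workflow, and it illustrates the schism described above, since the typography and layout of the print page survive only approximately.

```python
# Hypothetical sketch: dump each page of a PDF as XHTML using PyMuPDF (the
# "fitz" module). Layout is only approximated, which is the point made above.
import fitz  # pip install PyMuPDF

doc = fitz.open("book.pdf")  # hypothetical input file
pages = [page.get_text("xhtml") for page in doc]

with open("book.html", "w", encoding="utf-8") as out:
    out.write("<html><body>\n")
    out.write("\n<hr/>\n".join(pages))  # visible marker between print pages
    out.write("\n</body></html>\n")
```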
On the side of audio and video, most media files on
Monoskop are in open formats—OGG and WEBM. There
are many other challenges: keeping up-to-date with PHP
and MySQL development, with the MediaWiki software
and its numerous extensions, and dealing with the mysterious ICANN organization that controls the web domain.
AD
What were your biggest challenges besides technical ones?
For example, have you ever been in trouble regarding copyright issues, or if not, how would you deal with such a
situation?
DB
Monoskop operates on the assumption of making transformative use of the collected material. The fact of bringing
it into certain new contexts, in which it can be accessed,
viewed and interpreted, adds something that bookstores
don’t provide. Time will show whether this can be understood as fair use. It is an opt-out model and it proves to
be working well so far. Takedowns are rare, and if they are
legitimate, we comply.
AD
Perhaps related to this question, what is your experience
with user engagement? I remember Sean (from Aaaaarg,
in conversation with Matthew Fuller, Mute 2011) saying
that some people mirror or download the whole site, not
so much in an attempt to ‘have everything’ but as a way
to make sure that the content remains accessible. It is a
conscious decision because one knows that one day everything might be taken down. This is of course particularly
pertinent, especially since while we’re doing this interview
Sean and Marcell are being sued by a Canadian publisher.
DB
That is absolutely true and any of these websites can disappear any time. Archives like Aaaaarg, Monoskop or UbuWeb
are created by makers rather than guardians and it comes
as an imperative to us to embrace redundancy, to promote spreading their contents across as many nodes and sites as anyone wishes. We may look at copying not as merely mirroring or making backups, but opening up for possibilities to start new libraries, new platforms, new databases. That is how these came about as well. Let there be Zzzzzrgs, Ůbuwebs and Multiskops.
Bibliography
Fuller, Matthew. ‘In the Paradise of Too Many Books: An Interview with
Sean Dockray’. Mute, 4 May 2011. www.metamute.org/editorial/articles/paradise-too-many-books-interview-sean-dockray. Accessed 31 May 2016.
Online digital libraries
Aaaaarg, http://aaaaarg.fail.
Bibliotik, https://bibliotik.me.
Issuu, https://issuu.com.
Karagarga, https://karagarga.in.
Library Genesis / LibGen, http://gen.lib.rus.ec.
Memory of the World, https://library.memoryoftheworld.org.
Monoskop, https://monoskop.org.
Pad.ma, https://pad.ma.
Scribd, https://scribd.com.
Textz.com, https://textz.com.
UbuWeb, www.ubu.com.
Elbakyan
Why Science is Better with Communism? The Case of Sci-Hub (transcript and translation)
2016
# Transcript and translation of Sci-Hub presentation
_The University of North Texas's [Open Access Symposium
2016](/symposium/2016/) included [a presentation via Skype by Alexandra
Elbakyan](/symposium/2016/why-science-better-communism-case-sci-hub), the
founder of Sci-Hub. [Elbakyan's
slides](http://digital.library.unt.edu/ark:/67531/metadc850001/) (and those of
other presenters) have been archived in the UNT Digital Library, and [video of
this presentation](https://youtu.be/hr7v5FF5c8M) (and others) is now available
on YouTube and soon in the UNT Digital Library._
_The presentation was entitled "Why Science is Better with Communism? The Case
of Sci-Hub." Below is an edited transcript of the presentation produced by
Regina Anikina and Kevin Hawkins, with a translation by Kevin Hawkins and Anna
Pechenina._
**Martin Halbert** : We have a recent addition to our lineup of speakers that
we'll start off the day with: Alexandra Elbakyan. As many of you know,
Alexandra is a Kazakhstani graduate student, computer programmer, and the
creator of the controversial Sci-Hub site. The New York Times has compared her
to Edward Snowden for leaking information and because she avoids American law,
but Ars Technica has compared her to Aaron Swartz--so a controversial figure.
We thought it was very important to include her in the dialog about open
access because we want, in this symposium series, to include all the different
perspectives on copyright, intellectual property, open access, and access to
scholarly information. So I'm delighted that we're actually able to have her
here via Skype to present.
---
**Alexandra Elbakyan** : First of all, thank you for inviting me to share my
views. My name is Alexandra. As you might have guessed, I represent the site
Sci-Hub. It was founded in 2011 and immediately became popular among the local
community; almost immediately it began providing access to about 40 articles an hour, and it now provides more than 200,000.
It has to be said that over the course of the site's development it was
strongly supported by donations, and when for various reasons we had to
suspend the service, there were many displeased users who clamored for the
project to return so that the work in their laboratory could continue.
This is the case not just in poor countries; I can say that in rich countries
the public also doesn't have access to scholarly articles. And not all
universities have subscriptions to those resources that are required for
research.
A few of our users insisted that we start charging users, for example, by
allowing one or two articles to be downloaded for free but charging for more,
so that the service would be supported by those who really need it. But I
didn't end up doing that because the goal of the resource is knowledge for
all.
Certain open-access advocates criticize the site, saying that what we really
need is for articles to be in open access from the start, by changing the
business models of publishers. I can respond by saying that the goal of the
project is first and foremost the dissemination of scholarly knowledge in
society, and we have to work in the conditions we find ourselves in. Of
course, if scholarly publishers had a different business model, then perhaps
this project wouldn't be necessary. We can also imagine that if humans had
wings, we wouldn't need airplanes. But in any case we need to fly, so we make
airplanes.
Scholarly publishers quickly dubbed the work of Sci-Hub as piracy. Admittedly
Sci-Hub violates the laws of copyright, but copyright is related to the rights
of intellectual property. That is, scholarly articles are the property of
publishers, and reading them for free turns out to be something like theft
according to the current law.
The concept of intellectual property itself is not new, although it can seem
otherwise. The history of copyright goes back to around the 18th century,
although the first mentions of something similar can be found in the Talmud.
It's just that recently copyright has been found at the center of passionate
debate since some are trying to forbid the free distribution of information on the internet.
However, the central focus of the debate is on censorship and privacy. The
defense of intellectual property on the internet requires censorship of
websites, and that is consequently a violation of freedom of speech. This also
raises a question of interference in private life - that is, when the
government in some way monitors users who violate copyright. In principle this
is also an intrusion in communication.
However, the very essence of copyright - that is, the concept of intellectual
property - is almost never questioned. That is, whether knowledge can be
someone's property is rarely discussed.
However, our ancestors were even more daring. They did not just question
intellectual property but property in general. That is, there are works in
which we can find the appearance of the idea of communism. There's Thomas
More's _Utopia_ from the 16th century, but actually such works arose much
earlier, even in Ancient Greece, where these questions were already being discussed in 391 BCE.
If we look at the slogans of communism, we see that one of the core concepts
is the struggle against inequality, the revolt of the suppressed classes,
whose members don't have any power against those who have concentrated basic
resources and power in their hands, with the goal of redistributing these
resources.
We can see that even today there is a certain informational inequality, when,
for example, only students and employees of the most wealthy universities have
full access to scholarly information, while access can be completely lacking
for institutions at the next lower tier and for the general public.
An idea arises: if there isn't private property, then there's no basis for
unequal distribution of wealth. In our case as well: if there's no private
intellectual property and all scholarly publications are nationalized, then
all people will have equal access to knowledge.
However, a question arises: if there is no private property, then what can
stimulate a person to work? One of the ideas is that under communism, rather
than greed or aspiration for wealth being a stimulus for work, a person would
aspire to self-development and learning for the betterment of the world.
Even if such values can't be applied to society as a whole, they at least work
in the world of scholarship. Therefore in the Soviet Union there was a true
cult of science - statues were even erected to the glory of science - and
perhaps thanks to this our country was one of the first to go into space.
However, it's one thing to have a revolution, when there's a mass
redistribution of property in society, but an act of theft is another thing.
This, of course, is not yet a revolution, but it's a small protest against the
property rights and the unequal distribution of wealth. Theft as protest has
always been welcomed and approved of in all eras of society. For example, we
all know about Robin Hood, but there have actually been quite a few noble
bandits in history. I've listed just a few of them.
I think that if the state works well, then accordingly it has a working tax
system and a certain system of redistribution of wealth, and then,
accordingly, there's no cause for revolution, for example. But if for some
reasons the state works poorly, then people begin to solve the problem for
themselves. In this way, Sci-Hub is an appropriate response to the inequality
that has arisen due to lack of access to information.
Pictured is Aldar Köse, a Kazakh folk hero who used his cunning to deceive
wealthy beys and take possession of their property. It's interesting to note
that beys are always depicted as greedy and stupid. And if you look at what's
written in the blogosphere today about scholarly publishers, you can find
these same characteristics.
There's also the interesting figure of the ancient Greek god Hermes, the
patron of thieves. That is, theft was a sufficiently respected activity that
it had its own god.
There's a researcher named Norman Brown who wrote an academic work called
_Hermes the Thief: The Evolution of a Myth_. It turns out that this myth is
related to a certain revolution in ancient Greek society, when the lower
classes, which lacked property, began to rise up.
For example, the poet Theognis of Megara wrote that "those who were nothing
became everything" and vice versa. This is essentially one of the most well-
known communist slogans.
For the ancient Greeks this was related, again as Brown says, to the
appearance of trade. Trade was identified with theft. There was no clear
distinction between the exchange of legal and illegal goods - that is, trade
was just as much considered theft as what we call piracy today.
Why did it turn out this way? Because Hermes was originally a god of
boundaries and transitions. Therefore, we can think that property is related
to keeping something within boundaries. At the same time, the things that
Hermes protected - theft, trade and communication - are related to boundary-
crossing.
If we think about scholarly journals, then any journal is first of all a means
of communication, and therefore it's apparent that keeping journals in closed
access contradicts the essence of what they were intended for.
This is, of course, not even the most interesting thing.
Hermes actually evolved - that is, while he was once an intellectual deity, he
later came to be interpreted as the same as Thoth, the Egyptian god of
knowledge, and further came to oversee such things as astrology, alchemy, and
magic - that is, the things from which, you might say, contemporary sciences
arose. So we can say that contemporary science arose from theft.
Of course, someone can object, saying that contemporary science is very
different from esoterica, such as astrology and alchemy, but if we look at the
history of science, we see that contemporary science differs from the ancient
arts in the former being more open.
That is, when the movement towards greater openness appeared, contemporary
science also appeared. Once again this is not an argument in support of
scholarly publishers.
Indeed, in the cultural consciousness science and the process of learning have
always been closely associated with theft, beginning with the legend of Adam
and Eve and the forbidden tree, which is called simply "the tree of
knowledge." And it's interesting that Elsevier's logo depicts some kind of
tree, which, accordingly, raises associations with this tree in the Garden of
Eden - the tree of knowledge - from which it was forbidden to eat the fruit.
Likewise we can recall the well-known legend of Prometheus, a part of our
cultural consciousness, who stole some knowledge and brought it to humans.
Once again we see the connection between science and theft.
Nowadays, many scholars have described science as the knowledge of secrets.
However, if we look closely, we have to ask: what is a secret? A secret is
something private, in essence private property. Accordingly, the disclosure of
the secret signifies that it ceases to be property. Once again we see the
contradiction between scholarship and property rights.
We can recall Robert Merton, who studied research institutes and revealed four
basic ethical norms that in his opinion are important for their successful
functioning. One of them is communism - that is, knowledge is shared.
Accordingly, if we look at certain traditional communities, then we find that
those communities that function within a caste system (dividing people by
occupation) usually turn out to have certain castes of people with
intellectual occupations, and if you look at the ethical norms of such castes,
you find that they are also communistic. You can find this, for example, in
Plato. Or even if you look at India, you find the accumulation of wealth is
usually the occupation of another caste.
To sum up, we have the following take-aways. Science, as a part of culture, is
in conflict with private property. Accordingly, scholarly communication is a
dual conflict. What open access is doing is returning science to its essential
roots.
**Audience question** : I'm a former university press director. I'd just like
to point out also that "property is theft" is the watchword of French
anarchism, a famous phrase from Pierre-Joseph Proudhon, so perhaps anarchism
and science are also inseparable. But my main question really has to do with a
challenge that a librarian named Rick Anderson posted on the Scholarly Kitchen
blog two days ago, and that has to do with the fact that evidently Sci-Hub
relies a lot on the access codes that faculty have given to Sci-Hub in one way
or another so that Sci-Hub can gain access to the electronic materials that it
then uses to post on its own site. What Anderson does is point out that if
that information falls into the wrong hands, there are all sorts of terrible
things that can be done because those access codes provide access to personal
information, to student data, to all sorts of other things that could be badly
misused. So my question to you is: what assurances can you give us that that kind of information will not fall into the wrong hands?
**Elbakyan** : Well, first of all I doubt that it's possible to gain access to
all the information that is listed in the post on the Scholarly Kitchen. As a
rule, these logins and passwords can only be used for access to the proxy
server through which you can download articles, whereas for access to other
things, such as email, the login and password won't work. [ _Audience reacts
with skepticism._ ]
**Audience question** : Earlier this week a number of us participated in a
panel presentation on scholarly publishing and social justice, and one of the
primary points that came out of that was that the people who create the
published product - not necessarily the scientist but the people who actually
do the work that results in the published product - deserve to be paid for
their labor, and there is definitely labor involved. So if you're replacing
the market for these publications and eliminating these people's opportunities
to make money, where is the appropriate distribution of wealth?
**Elbakyan** : First of all, we shouldn't confuse the compensation that a
person receives for their labor with the excessive profits that publishers
wring out by limiting access to information. For example, Sci-Hub also does a
fair amount of work and has high expenses, but these expenses are for some
reason covered by donations - that is, there's no need to close access to
information - that is, it's a red herring to say that if articles are
distributed for free, people won't have anything to eat. One does not follow
from the other. In my opinion, though, an optimal system for funding would
consist of grants, donations, and membership fees.
**Audience question** : You've spoken so far exclusively about Sci-Hub. I
wonder if you could comment just briefly on LibGen and whether you see the two
models as identical or whether there are any material differences between
LibGen and Sci-Hub.
**Elbakyan** : Well, LibGen is primarily a repository. It doesn't download
new articles but is more aimed at preserving that which has already been
downloaded.
Liang
Shadow Libraries
2012
Over the last few monsoons I lived with the dread that the rain would eventually find its way through my leaky terrace roof and destroy my books.
Last August my fears came true when I woke up in the middle of the night to
see my room flooded and water leaking from the roof and through the walls.
Much of the night was spent rescuing the books and shifting them to a dry
room. While timing and speed were essential to the task at hand, they were also the key hazards, navigating a slippery floor with books piled up to one’s neck. At the end of the rescue mission, I sat alone, exhausted amongst a
mountain of books assessing the damage that had been done, but also having
found books I had forgotten or had not seen in years; books which I had
thought had been permanently borrowed by others or misplaced found their way
back as I set many aside in a kind of ritual of renewed commitment.
Sorting the badly damaged from the mildly wet, I could not help but think
about the fragile histories of books from the library of Alexandria to the
great Florence flood of 1966. It may have seemed presumptuous to move from the
precarity of one’s small library and collection to these larger events, but is
there any other way in which one experiences earth-shattering events if not
via a microcosmic filtering through one’s own experiences? I sent a distressed
email to a friend, Sandeep, a committed bibliophile and book collector with a
fantastic personal library, who had also been responsible for many of my new
acquisitions. He wrote back on August 17, and I quote an extract of the email:
> Dear Lawrence
>
> I hope your books are fine. I feel for you very deeply, since my nightmares
about the future all contain as a key image my books rotting away under a
steady drip of grey water. Where was this leak, in the old house or in the
new? I spent some time looking at the books themselves: many of them I greeted
like old friends. I see you have Lewis Hyde’s _Trickster Makes the World_ and
Edward Rice’s _Captain Sir Richard Francis Burton_ in the pile: both top-class
books. (Burton is a bit of an obsession with me. The man did and saw
everything there was to do and see, and thought about it all, and wrote it all
down in a massive pile of notes and manuscripts. He squirrelled a fraction of
his scholarship into the tremendous footnotes to the Thousand and One Nights,
but most of it he could not publish without scandalising the Victorians, and
then he died, and his widow made a bonfire in the backyard, and burnt
everything because she disapproved of these products of a lifetime’s labors,
and of a lifetime such as few have ever had, and no one can ever have again. I
almost hope there is a special hell for Isabel Burton to burn in.)
Moving from one’s personal pile to the burning of the work of one of the
greatest autodidacts of the nineteenth century and back, it was strangely
comforting to be reminded that libraries—the greatest of time machines
invented—were testimonies to both the grandeur and the fragility of
civilizations. Whenever I enter huge libraries it is with a tingling sense of
excitement normally reserved for horror movies, but at the same time this same
sense of awe is often accompanied by an almost debilitating sense of what it
means to encounter finitude as it is dwarfed by centuries of words and
scholarship. Yet strangely when I think of libraries it is rarely the New York
public library that comes to mind even as I wish that we could have similar
institutions in India. I think instead of much smaller collections—sometimes
of institutions but often just those of friends and acquaintances. I enjoy
browsing through people’s bookshelves, not just to discern their reading
preferences or to discover for myself unknown treasures, but also to take
delight in the local logic of their library, their spatial preferences and to
understand the order of things not as a global knowledge project but as a
personal, often quirky rationale.
Machine room for book transportation at the Library of Congress, early 20th
century.
Like romantic love, bibliophilia is perhaps shaped by one’s first love. The
first library that I knew intimately was a little six by eight foot shop
hidden in a by-lane off one of the busiest roads in Bangalore, Commercial
Street. From its name to what it contained, Mecca stores could well have been
transported out of an Arabian nights tale. One side of the store was lined
with plastic ware and kitchen utensils of every shape and size while the other
wall was piled with books, comics, and magazines. From my eight-year-old
perspective it seemed large enough to contain all the knowledge of the world.
I earned a weekly stipend packing noodles for an hour every day after school
in the home shop that my parents ran, which I used to either borrow or buy
second hand books from the store. I was usually done with them by Sunday and
would have them reread by Wednesday. The real anguish came in waiting from
Wednesday to Friday for the next set. After finally acquiring a small
collection of books and comics myself I decided—spurred on by a fatal
combination of entrepreneurial enthusiasm and a pedantic desire to educate
others—to start a small library myself. Packing my books into a small aluminum
case and armed with a makeshift ledger, I went from house to house convincing
children in the neighborhood to forgo twenty-five paisa in exchange for a book
or comic with an additional caveat that they were not to share them with any
of their friends. While the enterprise got off to a reasonable start it soon
met its end when I realized that despite my instructions, my friends were
generously sharing the comics after they were done with them, which thereby
ended my biblioempire ambitions.
Over the past few years the explosion of ebook readers and consequent rise in
the availability of pirated books have opened new worlds to my booklust.
[Library.nu](library.nu), which began as Gigapedia, suddenly made the idea of
the universal library seem like reality. By the time it shut down in February
2012 the library had close to a million books and over half a million active
users. Bibliophiles across the world were distraught when the site was shut
down, and if it were ever possible to experience what the burning of the library of Alexandria must have felt like, it was that collective ache of seeing the closure of [library.nu](library.nu).
What brings together something as monumental as the New York public library, a
collective enterprise like [library.nu](library.nu) and Mecca stores if not
the word library? As spaces they may have little in common but as virtual
spaces they speak as equals even if the scale of their imagination may differ.
All of them partake of their share in the world of logotopias. In an
exhibition designed to celebrate the place of the library in art, architecture
and imagination the curator Sascha Hastings coined the term logotopia to
designate “word places”—a happy coincidence of architecture and language.
There is however a risk of flattening the differences between these spaces by
classifying them all under a single utopian ideal of the library. Imagination
after all has a geography and physiology and requires our alertness to these
distinctions. Let's think instead of an entire pantheon (both of spaces as well
as practices) that we can designate as shadow libraries (or shadow logotopias
if you like) which exist in the shadows cast by the long history of monumental
libraries. While they are often dwarfed by the idea of the library, like the
shadows cast by our bodies, sometimes these shadows surge ahead of the body.
The London Library after the Blitz, c. 1940.
At the heart of all libraries lies a myth—that of the burning of the library
of Alexandria. No one knows what the library of Alexandria looked like or
possesses an accurate list of its contents. What we have long known though is
a sense of loss. But a loss of what? Of all the forms of knowledge in the
world in a particular time. Because that was precisely what the library of
Alexandria sought to collect under its roofs. It is believed that in order to
succeed in assembling a universal library, King Ptolemy I wrote “to all the
sovereigns and governors on earth” begging them to send to him every kind of
book by every kind of author, “poets and prose-writers, rhetoricians and
sophists, doctors and soothsayers, historians, and all others too.” The king’s
scholars had calculated that five hundred thousand scrolls would be required
if they were to collect in Alexandria “all the books of all the peoples of the
world.”1
What was special about the Library of Alexandria was the fact that until then
the libraries of the ancient world were either private collections of an
individual or government storehouses where legal and literary documents were
kept for official reference. By imagining a space where the public could have
access to all the knowledge of the world, the library also expressed a new
idea of the human itself. While the library of Alexandria is rightfully
celebrated, what is often forgotten in the mourning of its demise is another
library—one that existed in the shadows of the grand library but whose
whereabouts ensured that it survived Caesar’s papyrus-destroying flames.
According to the Sicilian historian Diodorus Siculus, writing in the first
century BC, Alexandria boasted a second library, the so-called daughter
library, intended for the use of scholars not affiliated with the Museion. It
was situated in the south-western neighborhood of Alexandria, close to the
temple of Serapis, and was stocked with duplicate copies of the Museion
library’s holdings. This shadow library survived the fire that destroyed the
primary library of Alexandria but has since been eclipsed by the latter’s
myth.
Alberto Manguel says that if the library of Alexandria stood tall as an
expression of universal ambitions, there is another structure that haunts our
imagination: the tower of Babel. If the library attempted to conquer time, the
tower sought to vanquish space. He says “The Tower of Babel in space and the
Library of Alexandria in time are the twin symbols of these ambitions. In
their shadow, my small library is a reminder of both impossible yearnings—the
desire to contain all the tongues of Babel and the longing to possess all the
volumes of Alexandria.”2 Writing about the two failed projects Manguel adds
that when seen within the limiting frame of the real, the one exists only as
nebulous reality and the other as an unsuccessful if ambitious real estate
enterprise. But seen as myths, and in the imagination at night, the solidity
of both buildings for him is unimpeachable.3
The utopian ideal of the universal library was more than a question of built
up form or space or even the possibility of storing all of the knowledge of
the world; its real aspiration was in the illusion of order that it could
impose on a chaotic world where the lines drawn by a fine hairbrush
distinguished the world of animals from men, fairies from ghosts, science from
magic, and Europe from Japan. In some cases even after the physical structure
that housed the books had crumbled and the books had been reduced to dust the
ideal remained in the form of the order imagined for the library. One such
residual evidence comes to us by way of the _Pandectae_—a comprehensive
bibliography created by Conrad Gesner in 1545 when he feared that the Ottoman
conquerors would destroy all the books in Europe. He created a bibliography
from which the library could be built again—an all embracing index which
contained a systematic organization of twenty principal groups with a matrix
like structure that contained 30,000 concepts.4
It is not surprising that Alberto Manguel would attempt to write a literary, historical and personal history of the library. As a seventeen-year-old man in Buenos Aires, Manguel read for the blind seer Jorge Luis Borges, who once imagined in his appropriately named story—The Library of Babel—paradise as a
kind of library. Modifying his mentor’s statement in what can be understood as
a gesture to the inevitable demands of the real and yet acknowledging the
possible pleasures of living in shadows, Manguel asserts that sometimes
paradise must adapt itself to suit circumstantial requirements. Similarly
Jacques Rancière, writing about the libraries of the working class in the eighteenth century, tells us about Gauny, a joiner and a boy in love with vagrancy and botany, who decides to build a library for himself. For the sons
of the poor proletarians living in Saint Marcel district, libraries were built
only a page at a time. He learnt to read by tracing the pages on which his
mother bought her lentils and would be disappointed whenever he came to the
end of a page and the next page was not available, even though he urged his
mother to buy her lentils from the same grocer. 5
Dominique Gonzalez-Foerster, _Chronotopes & Dioramas_, 2009. Diorama
installation at The Hispanic Society of America, New York.
Is the utopian ideal of the universal library as exemplified by the library of
Alexandria or modernist pedagogic institutions of the twentieth century
adequate to the task of describing the space of the shadow library, or do we
need a different account of these other spaces? In an era of the ebook reader
where the line between a book and a library is blurred, the very idea of a
library is up for grabs. It has taken me well over two decades to build a
collection of a few thousand books while around two hundred thousand books
exist as bits and bytes on my computer. Admittedly, hard drives crash and data is lost, but is that the same kind of threat as rain or fire? Which then is my library and which its shadow? Or, in the spirit of logotopias, would it be
more appropriate to ask the spatial question: where is the library?
If the possibility of having 200,000 books on one’s computer feels staggering, here is an even more startling statistic. The Library of Congress, the largest library in the world, holds approximately thirty million books, which—if they were piled on the floor—would cover 364 kilometers; yet the whole collection could potentially fit onto an SD card. It is estimated that by 2030 an ordinary SD card will have the capacity to store up to 64 TB, and assuming each book were digitized at an average size of 1 MB, it would technically be possible to fit two Libraries of Congress in one’s pocket.
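A minimal back-of-envelope sketch in Python of the arithmetic behind that claim, assuming only the figures cited above (thirty million volumes, an average of 1 MB per digitized book, a projected 64 TB card, and decimal units):

```python
# Back-of-envelope check of the two-Libraries-of-Congress claim (decimal units: 1 TB = 10**6 MB).
BOOKS_IN_LOC = 30_000_000      # approximate holdings cited above
AVG_BOOK_MB = 1                # assumed average size of one digitized book
SD_CARD_TB_IN_2030 = 64        # projected SD card capacity cited above

loc_size_tb = BOOKS_IN_LOC * AVG_BOOK_MB / 1_000_000
print(f"One Library of Congress is roughly {loc_size_tb:.0f} TB")                    # ~30 TB
print(f"Two collections fit on one card: {2 * loc_size_tb <= SD_CARD_TB_IN_2030}")   # True
```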
It sounds like science fiction, but isn’t it the case that much of the science fiction of a decade ago finds itself comfortably within the weave of everyday life? How do we make sense of the future of the library? While it may be tempting to throw our hands up in boggled perplexity about what it means to be able to have thirty million books, let’s face it: the point of libraries has never been that you will finish what’s there. Anyone with even a modest book collection will testify to the impossibility of ever finishing their library, and if anything the library stands precisely at the cusp of our finitude and our infinity. Perhaps that is what Borges—the consummate mixer of time and space—meant when he described paradise as a library: not as a spatial idea but a temporal one, in that it is only within the confines of infinity that one could imagine finishing reading one’s library. It would therefore be more
interesting to think of the shadow library as a way of thinking about what it
means to dwell in knowledge. While all our aspirations for a habitat should
have a utopian element to them, let’s face it, utopias have always been
difficult spaces to live in.
In contrast to the idea of utopia is heterotopia—a term with its origins in
medicine (referring to an organ of the body that had been dislodged from its
usual space) and popularized by Michel Foucault both in terms of language as
well as a spatial metaphor. If utopia exists as a nowhere or imaginary space
with no connection to any existing social spaces, then heterotopias in
contrast are realities that exist and are even foundational, but in which all
other spaces are potentially inverted and contested. A mirror, for instance, is simultaneously a utopia (a placeless place) even as it exists in reality. But from the standpoint of the mirror you discover your absence as well. Foucault
remarks, “The mirror functions as a heterotopia in this respect: it makes this
place that I occupy at the moment when I look at myself in the glass at once
absolutely real, connected with all the space that surrounds it, and
absolutely unreal, since in order to be perceived it has to pass through this
virtual point which is over there.”6
In _The Order of Things_ Foucault sought to investigate the conceptual space
which makes the order of knowledge possible; in his famed reading of Borges’s
Chinese encyclopedia he argues that the impossibility involved in the
encyclopedia consists less in the fantastical status of the animals and their
coexistence with real animals such as (d) sucking pigs and (e) sirens, but in
where they coexist and what “transgresses the boundaries of all imagination,
of all possible thought, is simply that alphabetical series (a, b, c, d) which
links each of those categories to all the others.” 7 Heterotopias destabilize
the ground from which we build order and in doing so reframe the very
epistemic basis of how we know.
Foucault later developed a greater spatial understanding of heterotopias in
which he uses specific examples such as the cemetery (at once the space of the
familiar since everyone has someone in the cemetery and at the heart of the
city but also over a period of time the other city, where each family
possesses its dark resting place).8 Indeed, the paradox of heterotopias is
that they are both separate from yet connected to all other spaces. This
connectedness is precisely what builds contestation into heterotopias.
Imaginary spaces such as utopias exist completely outside of order.
Heterotopias, by virtue of their connectedness, become sites in which epistemes
collide and overlap. They bring together heterogeneous collections of unusual
things without allowing them a unity or order established through resemblance.
Instead, their ordering is derived from a process of similitude that produces,
in an almost magical, uncertain space, monstrous combinations that unsettle
the flow of discourse.
If the utopian ideal of the library was to bring together everything that we
know of the world, then the length of its bookshelves was coterminous with the breadth of the world. But like its predecessors in Alexandria and Babel, the project is destined to be incomplete, haunted by what it necessarily leaves out and misses. The library as heterotopia reveals itself only through the interstices and lays bare the fiction of any possibility of a coherent ground on which a knowledge project can be built. Finally, there is the question of where we stand once the ground that we stand on has itself been dislodged.
The answer, from my first foray into the tiny six-by-eight-foot Mecca store to the innumerable hours spent on library.nu, remains the same:
the heterotopic pleasure of our finite selves in infinity.
×
This essay is a part of a work I am doing for an exhibition curated by Raqs
Media Collective, Sarai Reader 09. The show began on August 19, 2012, with a
deceptively empty space containing only the proposal, with ideas for the
artworks to come over a period of nine months.
**Lawrence Liang** is a researcher and writer based at the Alternative Law
Forum, Bangalore. His work lies at the intersection of law and cultural
politics, and has in recent years been looking at questions of media piracy. He is currently finishing a book on law and justice in Hindi cinema.
Notes
1 Esther Shipman and Sascha Hastings, eds., _Logotopia: The Library in Architecture, Art and the Imagination_ (Cambridge Galleries / ABC Art Books Canada, 2008).
2 Alberto Manguel, “My Library,” in Shipman and Hastings, eds., _Logotopia: The Library in Architecture, Art and the Imagination_ (Cambridge Galleries / ABC Art Books Canada, 2008).
3 Alberto Manguel, _The Library at Night_ (Yale University Press, 2009).
4 Shipman and Hastings, eds., _Logotopia: The Library in Architecture, Art and the Imagination_.
5 Jacques Rancière, _The Nights of Labour: The Workers’ Dream in Nineteenth Century France_ (Philadelphia: Temple University Press, 1991).
6 Michel Foucault, “Different Spaces,” in _Aesthetics, Method, Epistemology_, ed. James D. Faubion (New York: The New Press, 1998), 179. For Foucault on language and heterotopias see _The Order of Things: An Archaeology of the Human Sciences_ (New York: Pantheon, 1970).
7 Ibid., xv.
8 Foucault’s “Different Spaces” was presented as a lecture to the Architecture Studies Circle in 1967, a few years after the writing of _The Order of Things_.
The Surplus of Copying
How Shadow Libraries and Pirate Archives Contribute to the
Creation of Cultural Memory and the Commons
By Cornelia Sollfrank
Digital artworks tend to have a problematic relationship with the white
cube—in particular, when they are intended and optimized for online
distribution. While curators and exhibition-makers usually try to avoid
showing such works altogether, or at least aim at enhancing their sculptural
qualities to make them more presentable, the exhibition _Top Tens_ featured an
abundance of web quality digital artworks, thus placing emphasis on the very
media condition of such digital artifacts. The exhibition took place at the
Onassis Cultural Center in Athens in March 2018 and was part of the larger
festival _Shadow Libraries: UbuWeb in Athens_ ,1 an event to introduce the
online archive UbuWeb2 to the Greek audience and discuss related cultural,
ethical, technical, and legal issues. This text takes the event—and the
exhibition in particular—as a starting point for a closer look at UbuWeb and
the role an artistic approach can play in building cultural memory within the
neoliberal knowledge economy.
_UbuWeb—The Cultural Memory of the Avant-Garde_
Since Kenneth Goldsmith started Ubu in 1997, the site has become a major point
of reference for anyone interested in exploring twentieth-century avant-garde
art. The online archive provides free and unrestricted access to a remarkable
collection of thousands of artworks—among them almost 700 films and videos,
over 1000 sound art pieces, dozens of filmed dance productions, an
overwhelming amount of visual poetry and conceptual writing, critical
documents, but also musical scores, patents, electronic music resources, plus
an edition of vital new literature, the /ubu editions. Ubu contextualizes the
archived objects within curated sections and also provides framing academic
essays. Although it is a project run by Goldsmith without a budget, it has built a reputation for making available all the things one would not find elsewhere. The focus on “avant-garde” may seem a bit pretentious at first, but
when you look closer at the project, its operator and the philosophy behind
it, it becomes obvious how much sense this designation makes. Understanding
the history of the twentieth-century avant-garde as “a history of subversive
takes on creativity, originality, and authorship,”3 such spirit is not only
reflected in terms of the archive’s contents but also in terms of the project
as a whole. Theoretical statements by Goldsmith in which he questions concepts
such as authorship, originality, and creativity support this thesis4—and with
that a conflictual relationship with the notion of intellectual property is
preprogrammed. Therefore it comes as no surprise that the increasing
popularity of the project goes hand-in-hand with a growing discussion about
its ethical justification.
At the heart of Ubu, there is the copy! Every item in the archive is a digital
copy, either of another digital item or, in fact, it is the digitized version
of an analog object.5 That is to say, the creation of a digital collection is
inevitably based on copying the desired archive records and storing them on
dedicated media. However, making a copy is in itself a copyright-relevant act,
if the respective item is an original creation and as such protected under
copyright law.6 Hence, “any reproduction of a copyrighted work infringes the
copyright of the author or the corresponding rights of use of the copyright
holder”.7 Whether the existence of an artwork within the Ubu collection is a
case of copyright infringement varies with each individual case and depends on
the legal status of the respective work, but also on the way the rights
holders decide to act. As with all civil law, there is no judge without a
plaintiff, which means even if there is no express consent by the rights
holders, the work can remain in the archive as long as there is no request for
removal.8 Its status, however, is precarious. We find ourselves in the
notorious gray zone of copyright law where nothing is clear and many things
are possible—until somebody decides to challenge this status. Exploring the
borders of this experimental playground involves risk-taking, but, at the same
time, it is the only way to preserve existing freedoms and make a case for
changing cultural needs, which have not been considered in current legal
settings. And as the 20 years of Ubu’s existence demonstrate, the practice may
be experimental and precarious, but with growing cultural relevance and
reputation it is also gaining in stability.
_Fair Use and Public Interest_
At all public appearances and public presentations Goldsmith and his
supporters emphasize the educational character of the project and its non-
commercial orientation.9 Such a characterization is clearly intended to take
the wind out of the sails of its critics from the start and to shift the
attention away from the notion of piracy and toward questions of public
interest and the common good.
From a cultural point of view, the project unquestionably is of inestimable
value; a legal defense, however, would be a difficult undertaking. Copyright
law, in fact, has a built-in opening, the so-called copyright exceptions or
fair use regulations. They vary according to national law and cultural
traditions and allow for the use of copyrighted works under certain, defined
provisions without permission of the owner. The exceptions basically apply to
the areas of research and private study (both non-commercial), education,
review, and criticism and are described through general guidelines. “These
defences exist in order to restore the balance between the rights of the owner
of copyright and the rights of society at large.”10
A very powerful provision in most legislations is the permission to make
“private copies”, digital and analog ones, in small numbers, but they are
limited to non-commercial and non-public use, and passing on to a third party
is also excluded.11 As Ubu is an online archive that makes all of its records
publicly accessible and, not least, also provides templates for further
copying, it exceeds the notion of a “private copy” by far. Regarding further
fair use provisions, the four factors that are considered in a decision-making
process in US copyright provisions, for instance, refer to: 1) the purpose and
character of the use, including whether such use is of a commercial nature or
is for non-profit educational purposes; 2) the nature of the copyrighted work;
3) the amount and substantiality of the portion used in relation to the
copyrighted work as a whole; and 4) the effect of the use upon the potential market for or value of the copyrighted work (US Copyright Act of 1976, 17 U.S.C. §107). Applying these fair use provisions to Ubu, one might
consider that the main purposes of the archive relate to education and
research, that it is by its very nature non-commercial, and it largely does
not collide with any third party business interests as most of the material is
not commercially available. However, proving this in detail would be quite an
endeavor. And what complicates matters even more is that the archival material
largely consists of original works of art, which are subject to strict
copyright law protection, that all the works have been copied without any
transformative or commenting intention, and last but not least, that the
aspect of the appropriateness of the amount of used material becomes absurd
with reference to an archive whose quality largely depends on
comprehensiveness: the more the merrier. As Simon Stokes points out, legally
binding decisions can only be made on a case-by-case basis, which is why it is
difficult to make a general evaluation of Ubu’s legal situation.12 The ethical
defense tends to invoke the cultural value of the archive as a whole and its
invaluable contribution to cultural memory, while the legal situation does not
consider the value of the project as a whole and necessitates breaking it down
into all the individual items within the collection.
This very brief, not to say abridged, discussion of the possibilities of fair use
already demonstrates how complex it would be to apply them to Ubu. How
pointless it would be to attempt a serious legal discussion for such a
privately run archive becomes even clearer when looking at the problems public
libraries and archives have to face. While in theory such official
institutions may even have a public mission to collect, preserve, and archive
digital material, in practice, copyright law largely prevents the execution of
this task, as Steinhauer explains.13 The legal expert introduces the example
of the German National Library, which in 2006 was assigned the task of making back-up copies of all websites published under the .de top-level domain, only for this task to turn out to be illegal.14 Identifying a deficient legal situation when
it comes to collecting, archiving, and providing access to digital cultural
goods, Steinhauer even speaks of a “legal obligation to amnesia”.15 And it is
particularly striking that, from a legal perspective, the collecting of
digitalia is more strictly regulated than the collecting of books, for
example, where the property status of the material object comes into play.
Given the imbalance between cultural requirements, copyright law, and the
technical possibilities, it is not surprising that private initiatives are
being founded with the aim to collect and preserve cultural memory. These
initiatives make use of the affordability and availability of digital
technology and its infrastructures, and they take responsibility for the
preservation of cultural goods by simply ignoring copyright induced
restrictions, i.e. opposing the insatiable hunger of the IP regime for
control.
_Shadow Libraries_
Ubu was presented and discussed in Athens at an event titled _Shadow
Libraries: UbuWeb in Athens_ , thereby making clear reference to the ecosystem
of shadow libraries. A library, in general, is an institution that collects,
orders, and makes published information available while taking into account
archival, economic, and synoptic aspects. A shadow library does exactly the
same thing, but its mission is not an official one. Usually, the
infrastructure of shadow libraries is conceived, built, and run by a private
initiative, an individual, or a small group of people, who often prefer to
remain anonymous for obvious reasons. In terms of the media content provided,
most shadow libraries are peer-produced in the sense that they are based on
the contributions of a community of supporters, sometimes referred to as
“amateur librarians”. The two key attributes of any proper library, according
to Amsterdam-based media scholar Bodo Balazs, are the catalog and the
community: “The catalogue does not just organize the knowledge stored in the
collection; it is not just a tool of searching and browsing. It is a critical
component in the organisation of the community of librarians who preserve and
nourish the collection.”16 What is specific about shadow libraries, however,
is the fact that they make available anything their contributors consider to
be relevant—regardless of its legal status. That is to say, shadow libraries
also provide unauthorized access to copyrighted publications, and they make
the material available for download without charge and without any other
restrictions. And because there is a whole network of shadow libraries whose
mission is “to remove all barriers in the way of science,”17 experts speak of
an ecosystem fostering free and universal access to knowledge.
The notion of the shadow library enjoyed popularity in the early 2000s when
the wide availability of digital networked media contributed to the emergence
of large-scale repositories of scientific materials, the most famous one
having been Gigapedia, which later transformed into library.nu. This project
was famous for hosting approximately 400,000 (scientific) books and journal
articles but had to be shut down in 2012 as a consequence of a series of
injunctions from powerful publishing houses. The now leading shadow library in
the field, Library Genesis (LibGen), can be considered as its even more
influential successor. As of November 2016 the database contained 25 million
documents (42 terabytes), of which 2.1 million were books, with digital copies
of scientific articles published in 27,134 journals by 1342 publishers.18 The
large majority of the digital material is of scientific and educational nature
(95%), while only 5% serves recreational purposes.19 The repository is based
on various ways of crowd-sourcing, i.e. social and technical forms of
accessing and sharing academic publications. Despite a number of legal cases
and court orders, the site is still available under various and changing
domain names.20
The related project Sci-Hub is an online service that processes requests for
pay-walled articles by providing systematic, automated, but unauthorized
backdoor access to proprietary scholarly journal databases. Users requesting
papers not present in LibGen are advised to download them through Sci-Hub; the
respective PDF files are served to users and automatically added to LibGen (if
not already present). According to _Nature_ magazine, Sci-Hub hosts around 60
million academic papers and was able to serve 75 million downloads in 2016. On
a daily basis 70,000 users access approximately 200,000 articles.
The founder of the meta library Sci-Hub is Kazakh programmer Alexandra
Elbakyan, who has been sued by large publishing houses and was twice ordered to pay almost US$20 million in compensation for the losses her activities have allegedly caused, which is why she had to go underground in Russia. For
illegally leaking millions of documents the _New York Times_ compared her to
Edward Snowden in 2016: “While she didn’t reveal state secrets, she took a
stand for the public’s right to know by providing free online access to just
about every scientific paper ever published, ranging from acoustics to
zymology.” 21 In the same year the prestigious _Nature_ magazine elected her
as one of the ten most influential people in science. 22 Unlike other
persecuted people, she went on the offensive and started to explain her
actions and motives in court documents and blog posts. Sci-Hub encourages new
ways of distributing knowledge, beyond any commercial interests. It provides a
radically open infrastructure thus creating an inviting atmosphere. “It is a
knowledge infrastructure that can be freely accessed, used and built upon by
anyone.”23
As both projects LibGen and Sci-Hub are based in post-Soviet countries, Balazs
reconstructs the history and spirit of Russian reading culture and brings the two projects into connection with it.24 Interestingly, the author also establishes a
connection to the Kolhoz (Russian: колхо́з), an early Soviet collective farm
model that was self-governing, community-owned, and a collaborative
enterprise, which he considers to be a major inspiration for the digital
librarians. He also identifies parallels between this Kolhoz model and the
notion of the “commons”—a concept that will be discussed in more detail with
regards to shadow libraries further below.
According to Balazs, these sorts of libraries and collections are part of the
Guerilla Open Access movement (GOA) and thus practical manifestations of Aaron
Swartz’s “Guerilla Open Access Manifesto”.25 In this manifesto the American
hacker and activist pointed out the flaws of open access politics and aimed at
recruiting supporters for the idea of “radical” open access. Radical in this
context means to completely ignore copyright and simply make as much
information available as possible. “Information is power” is how the manifesto
begins. Basically, it addresses the—what he calls—“privileged”, in the sense
that they do have access to information as academic staff or librarians, and
he calls on their support for building a system of freely available
information by using their privilege, downloading and making information
available. Swartz and Elbakyan both have become the “iconic leaders”26 of a
global movement that fights for scientific knowledge to be(come) freely
accessible and whose protagonists usually prefer to operate unrecognized.
While their particular projects may be of a more or less temporary nature, the
discursive value of the work of the “amateur librarians” and their projects
will have a lasting impact on the development of access politics.
_Cultural and Knowledge Commons_
The above discussion illustrates that the phenomenon of shadow libraries
cannot be reduced to its copyright infringing aspects. It needs to be
contextualized within a larger sociopolitical debate that situates the demand
for free and unrestricted access to knowledge within the struggle against the
all-co-opting logic of capital, which currently aims to economize all aspects
of life.
In his analysis of the Russian shadow libraries Balazs has drawn a parallel to
the commons as an alternative mode of ownership and a collective way of
dealing with resources. The growing interest in the discourses around the
commons demonstrates the urgency and timeliness of this concept. The
structural definition of the commons conceived by political economist Massimo
de Angelis allows for its application in diverse fields: “Commons are social
systems in which resources are pooled by a community of people who also govern
these resources to guarantee the latter’s sustainability (if they are natural
resources) and the reproduction of the community. These people engage in
‘commoning,’ that is a form of social labour that bears a direct relation to
the needs of the people, or the commoners”.27 While the model originates in
historical ways of sharing natural resources, it has gained new momentum in
relation to very different resources, thus constituting a third paradigm of
production—beyond state and private—however, with all commoning activities
today still being embedded in the surrounding economic system.
As a reason for the newly aroused interest in the commons, de Angelis provides
the crisis of global capital, which has maneuvered itself into a systemic
impasse. While constantly expanding through its inherent logic of growth and
accumulation, it is the very same logic that destroys the two systems capital
relies on: non-market-shaped social reproduction and the ecological system.
Within this scenario de Angelis describes capital as being in need of the
commons as a “fix” for the most urgent systemic failures: “It needs a ‘commons
fix,’ especially in order to deal with the devastation of the social fabric as
a result of the current crisis of reproduction. Since neoliberalism is not
about to give up its management of the world, it will most likely have to ask
the commons to help manage the devastation it creates. And this means: if the
commons are not there, capital will have to promote them somehow.”28
This rather surprising entanglement of capital and the commons, however, is
not the only perspective. Commons, at the same time, have the potential to
create “a social basis for alternative ways of articulating social production,
independent from capital and its prerogatives. Indeed, today it is difficult
to conceive emancipation from capital—and achieving new solutions to the
demands of _buen vivir_ , social and ecological justice—without at the same
time organizing on the terrain of commons, the non-commodified systems of
social production. Commons are not just a ‘third way’ beyond state and market
failures; they are a vehicle for emerging communities of struggle to claim
ownership to their own conditions of life and reproduction.”29 It is their
purpose to satisfy people’s basic needs and empower them by providing access
to alternative means of subsistence. In that sense, commons can be understood
as an _experimental zone_ in which participants can learn to negotiate
responsibilities, social relations, and peer-based means of production.
_Art and Commons_
Projects such as UbuWeb, Monoskop,30 aaaaarg,31 Memory of the World,32 and
0xdb33 vary in size and have different forms of organization and foci, but
they all care for specific cultural goods and make sure these goods remain
widely accessible—be it digital copies of artworks and original documents,
books and other text formats, videos, film, or sound and music. Unlike the
large shadow libraries introduced above, which aim to provide access to
hundreds of thousands, if not millions of mainly academic papers and books,
thus trying to fully cover the world of scholarly and academic works, the
smaller artist-run projects are of a different nature. While UbuWeb’s founder,
for instance, also promotes a generally unrestricted access to cultural goods,
his approach with UbuWeb is to build a curated archive with copies of artworks
that he considers to be relevant for his very context.34 The selection is
based on personal assessment and preference and cared for affectionately.
Despite its comprehensiveness, it still can be considered a “personal website”
on which the artist shares things relevant to him. As such, he is in good
company with similar “artist-run shadow libraries”, which all provide a
technical infrastructure with which they share resources, while the resources
are of specific relevance to their providers.
Just like the large pirate libraries, these artistic archiving and library
practices challenge the notion of culture as private property and remind us
that it is not an unquestionable absolute. As Jonathan Lethem contends,
“[culture] rather is a social negotiation, tenuously forged, endlessly
revised, and imperfect in its every incarnation.”35 Shadow libraries, in
general, are symptomatic of the cultural battles and absurdities around access
and copyright within an economic logic that artificially tries to limit the
abundance of digital culture, in which sharing does not mean dividing but
rather multiplying. They have become a cultural force, one that can be
represented in Foucauldian terms, as symptomatic of broader power struggles as
well as systemic failures inherent in the cultural formation. As Marczewska
puts it, “Goldsmith moves away from thinking about models of cultural
production in proprietary terms and toward paradigms of creativity based on a
culture of collecting, organizing, curating, and sharing content.”36 And by
doing so, he produces major contradictions, or rather he allows the already
existing contradictions to come to light. The artistic archives and libraries
are precarious in terms of their legal status, while it is exactly due to
their disregard of copyright that cultural resources could be built that
exceed the relevance of most official archives that are bound to abide by the law. In fact, there are no comparable official resources, which is why the
function of these projects is at least twofold: education and preservation.37
Maybe UbuWeb and the other, smaller or larger, shadow libraries do not qualify
as commons in the strict sense of involving not only a non-market exchange of
goods but also a community of commoners who negotiate the terms of use among
themselves. This would require collective, formalized, and transparent types
of organization. Furthermore, most of the digital items they circulate are
privately owned and therefore cannot simply be transferred to become commons
resources. These projects, in many respects, are in a preliminary stage by
pointing to the _ideal of culture as a commons_. By providing access to
cultural goods and knowledge that would otherwise not be available at all or
inaccessible for large parts of the general public, they might even fulfill
the function of a “commons fix”, to a certain degree, but at the same time
they are the experimental zone needed to unlearn copyright and relearn new
ways of cultural production and dissemination beyond the property regime. In
any case, they can function as perfect entry points for the discussion and
investigation of the transformative force art can have within the current
global neoliberal knowledge society.
_Top Tens—Showcasing the Copy as an Aesthetic and Political Statement_
The exhibition _Top Tens_ provided an experimental setting to explore the
possibilities of translating the abundance of a digital archive into a “real
space”, by presenting one hundred artworks from the Ubu archive.38 Although
all works were properly attributed in the exhibition, the artists whose works
were shown neither had a say about their participation in the exhibition nor
about the display formats. Tolerating the presence of a work in the archive is
one thing; tolerating its display in such circumstances is something else,
which might even touch upon moral rights and the integrity of the work.
However, the exhibition was not so much about the individual works on display
but the archiving condition they are subject to. So the discussion here has
nothing to do with the abiding art-theory question of original and copy.
Marginally, it is about the question of high-quality versus low-quality
copies. In reproducible media the value of an artwork cannot be based on its
originality any longer—the core criterion for sales and market value. This is
why many artists use the trick of high-resolution and limited edition, a kind
of distributed originality status for several authorized objects, none of which is 100 percent original but each still a bit more original than an arbitrary
unlimited edition. Leaving this whole discussion aside was a clear indication
that something else was at stake. The conceptual statement made by the
exhibition and its makers foregrounded the nature of the shadow library, which
visitors were able to experience when entering the gallery space. Instead of
viewing the artworks in the usual way—online—they had the opportunity to
physically immerse themselves in the cultural condition of proliferated acts
of copying, something that “affords their reconceptualization as a hybrid
creative-critical tool and an influential aesthetic category.”39
Appropriation and copying as longstanding methods of subversive artistic
production, where the reuse of existing material serves as a tool for
commentary, social critique, and the making of a political statement, have expanded here to the art of exhibition-making. The individual works serve to
illustrate a curatorial concept, thus radically shifting the avant-garde
gesture that copying used to be in the twentieth century, to breathe new life into the “culture of collecting, organizing, curating, and sharing content.”
Organizing this conceptually concise exhibition was a brave and bold statement
by the art institution: The Onassis Cultural Centre, one of Athens’ most
prestigious cultural institutions, dared to adopt a resolutely political
stance for a—at least in juridical terms—questionable project, as Ubu lives
from the persistent denial of copyright. Neglecting the concerns of the
individual authors and artists for a moment was a necessary precondition in
order to make space for rethinking the future of cultural production.
________________
Special thanks to Eric Steinhauer and all the artists and amateur librarians
who are taking care of our cultural memory.
1 Festival program online: Onassis Cultural Centre, “Shadow Libraries: UbuWeb
in Athens,” (accessed on Sept. 30, 2018).
2 _UbuWeb_ is a massive online archive of avant-garde art created over the
last two decades by New York-based artist and writer Kenneth Goldsmith.
Website of the archive: (accessed on Sept. 30, 2018).
3 Kaja Marczewska, _This Is Not a Copy. Writing at the Iterative Turn_ (New
York: Bloomsbury Academic, 2018), 22.
4 For further reading: Kenneth Goldsmith, _Uncreative Writing: Managing
Language in the Digital Age_ (New York: Columbia University Press, 2011).
5 Many works in the archive stem from the pre-digital era, and there is no
precise knowledge of the sources where Ubu obtains its material, but it is
known that Goldsmith also digitizes a lot of material himself.
6 In German copyright law, for example, §17 and §19a grant the exclusive right
to reproduce, distribute, and make available online to the author. See also: (accessed on Sept. 30,
2018).
7 Eric Steinhauer, “Rechtspflicht zur Amnesie: Digitale Inhalte, Archive und
Urheberrecht,” _iRightsInfo_ (2013),
/rechtspflicht-zur-amnesie-digitale-inhalte-archive-und-urheberrecht/18101>
(accessed on Sept. 30, 2018).
8 In particularly severe cases of copyright infringement also state
prosecutors can become active, which in practice, however, remains the
exception. The circumstances in which criminal law must be applied are
described in §109 of German copyright law.
9 See, for example, “Shadow Libraries” for a video interview with Kenneth
Goldsmith.
10 Paul Torremans, _Intellectual Property Law_ (Oxford: Oxford University
Press, 2010), 265.
11 See also §53 para. 1–3 of the German Act on Copyright and Related Rights
(UrhG), §42 para. 4 in the Austrian UrhG, and Article 19 of Swiss Copyright
Law.
12 Simon Stokes, _Art & Copyright_ (Oxford: Hart Publishing, 2003).
13 Steinhauer, “Rechtspflicht zur Amnesie”.
14 This discrepancy between a state mandate for cultural preservation and
copyright law has only been fixed in 2018 with the introduction of a special
law, §16a DNBG.
15 Steinhauer, “Rechtspflicht zur Amnesie”.
16 Bodo Balazs, “The Genesis of Library Genesis: The Birth of a Global
Scholarly Shadow Library,” Nov. 4, 2014, _SSRN_ (accessed on Sept. 30, 2018).
17 Motto of Sci-Hub. See “Sci-Hub,” _Wikipedia_ (accessed on Sept. 30, 2018).
18 Guillaume Cabanac, “Bibliogifts in LibGen? A study of a text-sharing
platform driven by biblioleaks and crowdsourcing,” _Journal of the Association
for Information Science and Technology_ , 67, 4 (2016): 874–884.
19 Ibid.
20 The current address is (accessed on Sept. 30, 2018).
21 Kate Murphy, “Should All Research Papers Be Free?” _New York Times Sunday
Review_ , Mar. 12, 2016,
/should-all-research-papers-be-free.html> (accessed on Sept. 30, 2018).
22 Richard Van Noorden, “Nature’s 10,” _Nature_ , Dec. 19, 2016, (accessed on Sept. 30,
2018).
23 Bodo Balazs, “Pirates in the library – an inquiry into the guerilla open
access movement,” paper for the 8th Annual Workshop of the International
Society for the History and Theory of Intellectual Property, CREATe,
University of Glasgow, UK, July 6–8, 2016. Available online at: https://adrien-chopin.weebly.com/uploads/2/1/7/6/21765614/2016_bodo_-_pirates.pdf
(accessed on Sept. 30, 2018).
24 Balazs, “The Genesis of Library Genesis”.
25 Aaron Swartz, “Guerilla Open Access Manifesto,” _Internet Archive_ , July
2008,
(accessed on Sept. 30, 2018).
26 Balazs, “Pirates in the library”.
27 Massimo De Angelis, “Economy, Capital and the Commons,” in: _Art,
Production and the Subject in the 21st Century_ , eds. Angela Dimitrakaki and
Kirsten Lloyd (Liverpool: Liverpool University Press, 2015), 201.
28 Ibid., 211.
29 Ibid.
30 See: (accessed on Sept. 30, 2018).
31 Accessible with invitation. See:
[https://aaaaarg.fail/](https://aaaaarg.fail) (accessed on Sept. 30, 2018).
32 See: (accessed on Sept. 30, 2018).
33 See: (accessed on Sept. 30, 2018).
34 Kenneth Goldsmith in conversation with Cornelia Sollfrank, _The Poetry of
Archiving_ , 2013, (accessed on Sept. 30, 2018).
35 Jonathan Lethem, _The Ecstasy of Influence: Nonfictions, etc._ (London:
Vintage, 2012), 101.
36 Marczewska, _This Is Not a Copy_ , 2.
37 The research project _Creating Commons_ , based at Zurich University of the
Arts, is dedicated to the potential of art projects for the creation of
commons: “creating commons,” (accessed on
Sept. 30, 2018).
38 One of Ubu’s features online has been the “top ten”, the idea to invite
guests to pick their ten favorite works from the archive and thus introduce a
mix between chance operation and subjectivity in order to reveal hidden
treasures. The curators of the festival in Athens, Ilan Manouach and Kenneth
Goldsmith, decided to elevate this principle to the curatorial concept of the
exhibition and invited ten guests to select their ten favorite works. The
Athens-based curator Elpida Karaba was commissioned to work on an adequate
concept for the realization, which turned out to be a huge black box divided
into ten small cubicles with monitors and seating areas, supplemented by a
large wall projection illuminating the whole space.
39 Marczewska, _This Is Not a Copy_ , 7.
This text is under a _Creative Commons_ license: CC BY NC SA 3.0 Austria
The Digital Condition
By Felix Stalder
The show had already been going on for more than three hours, but nobody
was bothered by this. Quite the contrary. The tension in the venue was
approaching its peak, and the ratings were through the roof. Throughout
all of Europe, 195 million people were watching the spectacle on
television, and the social mass media were gaining steam. On Twitter,
more than 47,000 messages were being sent every minute with the hashtag
#Eurovision.1 The outcome was
decided shortly after midnight: Conchita Wurst, the bearded diva, was
announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
as the public celebrated the victor -- but also itself. At long last,
there was more to the event than just another round of tacky television
programming ("This is Ljubljana calling!"). Rather, a statement was made
-- a statement in favor of tolerance and against homophobia, for
diversity and for the right to define oneself however one pleases. And
Europe sent this message in the midst of a crisis and despite ongoing
hostilities, not to mention all of the toxic rumblings that could be
heard about decadence, cultural decay, and Gayropa. Visibly moved, the
Austrian singer let out an exclamation -- "We are unity, and we are
unstoppable!" -- as she returned to the stage with wobbly knees to
accept the trophy.
With her aesthetically convincing performance, Conchita succeeded in
unleashing a strong desire for personal self-discovery, for community, and for overcoming stale
conventions. And she did this through a character that mainstream
society would have considered paradoxical and deviant not long ago but
has since come to understand: attractive beyond the dichotomy of man and
woman, explicitly artificial and yet entirely authentic. This peculiar
conflation of artificiality and naturalness is equally present in
Berndnaut Smilde's photographic work of a real indoor cloud (*Nimbus*, 2010) on the cover of this book. Conchita's performance was also on a
formal level seemingly paradoxical: extremely focused and completely
open. Unlike most of the other acts, she took the stage alone, and
though she hardly moved at all, she nevertheless incited the audience to
participate in numerous ways and genuinely to act out the motto of the
contest ("Join us!"). Throughout the early rounds of the competition,
the beard, which was at first so provocative, transformed into a
free-floating symbol that the public began to appropriate in various
ways. Men and women painted Conchita-like beards on their faces,
newspapers printed beards to be cut out, and fans crocheted beards. Not
only did someone Photoshop a beard on to a painting of Empress Sissi of
Austria, but King Willem-Alexander of the Netherlands even tweeted a
deceptively realistic portrait of his wife, Queen Máxima, wearing a
beard. From one of the biggest stages of all, the evening of Wurst's
victory conveyed an impression of how much the culture of Europe had
changed in recent years, both in terms of its content and its forms.
That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
or the conflation of reception and production, for instance -- was now
part of the mainstream. Even while sitting in front of the television,
this mainstream was no longer just a private audience but rather a
multitude of singular producers whose networked activity -- on location
or on social mass media -- lent particular significance to the occasion
as a moment of collective self-perception.
It is more than half a century since Marshall McLuhan announced the end
of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
in honor of the print medium by which it was so influenced. What was
once just an abstract speculation of media theory, however, now
describes the concrete reality of our everyday life. What's more, we have moved well past McLuhan's
diagnosis: the erosion of old cultural forms, institutions, and
certainties is not just something we affirm, but new ones have already
formed whose contours are easy to identify not only in niche sectors but
in the mainstream. Shortly before Conchita's triumph, Facebook thus
expanded the gender-identity options for its billion-plus users from 2
to 60. In addition to "male" and "female," users of the English version
of the site can now choose from among the following categories:
Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
Female to Male Trans Man, Female to Male Transgender Man, Female to Male
Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
(MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
Pangender, Polygender, T*Man, Trans, Trans Female, Trans Male, Trans
Man, Trans Person, Trans*Female, Trans*Male, Trans*Man,
Trans*Person, Trans*Woman, Transexual, Transexual Female, Transexual
Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
Female, Transgender Person, Transmasculine, T*Woman, Two*Person,
Two-Spirit, Two-Spirit Person.
This enormous proliferation of cultural possibilities is an expression
of what I will refer to below as the digital condition. Far from being
universally welcomed, its growing presence has also instigated waves of
nostalgia, diffuse resentments, and intellectual panic. Conservative and
reactionary movements, which oppose such developments and desire to
preserve or even re-create previous conditions, have been on the rise.
Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
be mentioned positively in the sexual education curriculum. Its impetus
was a working paper released at the end of 2013 by the state's Ministry of Culture. Among other
things, it proposed that adolescents "should confront their own sexual
identity and orientation [...] from a position of acceptance with respect to sexual diversity."2 In a
short period of time, a campaign organized mainly through social mass
media collected more than 200,000 signatures in opposition to the
proposal and submitted them to the petitions committee at the state
parliament. At that point, the government responded by putting the
initiative on ice. However, according to the analysis presented in this
book, leaving it on ice creates a precarious situation.
The rise and spread of the digital condition is the result of a
wide-ranging and irreversible cultural transformation, the beginnings of
which can in part be traced back to the nineteenth century. Since the
1960s, however, this shift has accelerated enormously and has
encompassed increasingly broader spheres of social life. More and more
people have been participating in cultural processes; larger and larger
dimensions of existence have become battlegrounds for cultural disputes;
and social activity has been intertwined with increasingly complex
technologies, without which it would hardly be possible to conceive of
these processes, let alone achieve them. The number of competing
cultural projects, works, reference points, and reference systems has
been growing rapidly. This, in turn, has caused an escalating crisis for
the established forms and institutions of culture, which are poorly
equipped to deal with such an inundation of new claims to meaning. Since
roughly the year 2000, many previously independent developments have
been consolidating, gaining strength and modifying themselves to form a
new cultural constellation that encompasses broad segments of society --
a new galaxy, as McLuhan might have
said.3 These days it is relatively
easy to recognize the specific forms that characterize it as a whole and
how these forms have contributed to new, contradictory and
conflict-laden political dynamics.
My argument, which is restricted to cultural developments in the
(transatlantic) West, is divided into three chapters. In the first, I
will outline the *historical* developments that have given rise to this
quantitative and qualitative change and have led to the crisis faced by
the institutions of the late phase of the Gutenberg Galaxy, which
defined the last third of the twentieth century.4 The expansion of
the social basis of cultural processes will be traced back to changes in
the labor market, to the self-empowerment of marginalized groups, and to
the dissolution of centralized cultural geography. The broadening of
cultural fields will be discussed in terms of the rise of design as a
general creative discipline, and the growing significance of complex
technologies -- as fundamental components of everyday life -- will be
tracked from the beginnings of independent media up to the development
of the internet as a mass medium. These processes, which at first
unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
of the coherent digital condition. From the perspective of cultural
studies and media theory, the second chapter will delineate the already
recognizable features of this new culture. Concerned above all with the
analysis of forms, its focus is thus on the question of "how" cultural
practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
content, social spheres, and locations that it is even possible to speak
of the digital condition in the singular. Three examples of such forms
stand out in particular. *Referentiality* -- that is, the use of
existing cultural materials for one's own production -- is an essential
feature of many methods for inscribing oneself into cultural processes.
In the context of unmanageable masses of shifting and semantically open
reference points, the act of selecting things and combining them has
become fundamental to the production of meaning and the constitution of
the self. The second feature that characterizes these processes is
*communality*. It is only through a collectively shared frame of
reference that meanings can be stabilized, possible courses of action
can be determined, and resources can be made available. This has given
rise to communal formations that generate self-referential worlds, which
in turn modulate various dimensions of existence -- from aesthetic
preferences to the methods of biological reproduction and the rhythms of
space and time. In these worlds, the dynamics of network power have
reconfigured notions of voluntary and involuntary behavior, autonomy,
and coercion. The third feature of the new cultural landscape is its
*algorithmicity*. It is characterized, in other []{#Page_5
type="pagebreak" title="5"}words, by automated decision-making processes
that reduce and give shape to the glut of information, by extracting
information from the volume of data produced by machines. This extracted
information is then accessible to human perception and can serve as the
basis of singular and communal activity. Faced with the enormous amount
of data generated by people and machines, we would be blind were it not
for algorithms.
The third chapter will focus on *political dimensions*. These are the
factors that enable the formal dimensions described in the preceding
chapter to manifest themselves in the form of social, political, and
economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
general cultural forms that emerged from these changes with a certain
degree of inevitability, my concentration here will be on open-ended
dynamics that can still be influenced. A contrast will be made between
two political tendencies of the digital condition that are already quite
advanced: *post-democracy* and *commons*. Both take full advantage of
the possibilities that have arisen on account of structural changes and
have advanced them even further, though in entirely different
directions. "Post-democracy" refers to strategies that counteract the
enormously expanded capacity for social communication by disconnecting
the possibility to participate in things from the ability to make
decisions about them. Everyone is allowed to voice his or her opinion,
but decisions are ultimately made by a select few. Even though growing
numbers of people can and must take responsibility for their own
activity, they are unable to influence the social conditions -- the
social texture -- under which this activity has to take place. Social
mass media such as Facebook and Google will receive particular attention
as the most conspicuous manifestations of this tendency. Here, under new
structural provisions, a new combination of behavior and thought has
been implemented that promotes the normalization of post-democracy and
contributes to its otherwise inexplicable acceptance in many areas of
society. "Commons," on the contrary, denotes approaches for developing
new and comprehensive institutions that not only directly combine
participation and decision-making but also integrate economic, social,
and ethical spheres -- spheres that Modernity has tended to keep
apart.[]{#Page_6 type="pagebreak" title="6"}
Post-democracy and commons can be understood as two lines of development
that point beyond the current crisis of liberal democracy and represent
new political projects. One can be characterized as an essentially
authoritarian system, the other as a radical expansion and renewal of
democracy, from the notion of representation to that of participation.
Even though I have brought together a number of broad perspectives, I
have refrained from discussing certain topics that a book entitled *The
Digital Condition* might be expected to address, notably the matter of
copyright. This is easy to explain. As regards the new
forms at the heart of this book, none of these developments requires or
justifies copyright law in its present form. In any case, my thoughts on
the matter were published not long ago in another book, so there is no
need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
of privacy will also receive little attention. This is not because I
share the view, held by proponents of "post-privacy," that it would be
better for all personal information to be made available to everyone. On
the contrary, this position strikes me as superficial and naïve. That
said, the political function of privacy -- to safeguard a degree of
personal autonomy from powerful institutions -- is based on fundamental
concepts that, in light of the developments to be described below,
urgently need to be updated. This is a task, however, that would take me
far beyond the scope of the present
book.[^6^](#f6-note-0006){#f6-note-0006a}
Before moving on to the first chapter, I should briefly explain my
somewhat unorthodox understanding of the central concepts in the title
of the book -- "condition" and "digital." In what follows, the term
"condition" will be used to designate a cultural condition whereby the
processes of social meaning -- that is, the normative dimension of
existence -- are explicitly or implicitly negotiated and realized by
means of singular and collective activity. Meaning, however, does not
manifest itself in signs and symbols alone; rather, the practices that
engender it and are inspired by it are consolidated into artifacts,
institutions, and lifeworlds. In other words, far from being a symbolic
accessory or mere overlay, culture in fact directs our actions and gives
shape to society. By means of materialization and repetition, meaning --
both as claim and as reality -- is made visible, productive, and
negotiable. People are free to accept it, reject it, or ignore
[]{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
that is, meaning shared by multiple people -- can only come about
through processes of exchange within larger or smaller formations.
Production and reception (to the extent that it makes any sense to
distinguish between the two) do not proceed linearly here, but rather
loop back and reciprocally influence one another. In such processes, the
participants themselves determine, in a more or less binding manner, how
they stand in relation to themselves, to each other, and to the world,
and they determine the frame of reference in which their activity is
oriented. Accordingly, culture is not something static or something that
is possessed by a person or a group, but rather a field of dispute
subject to multiple ongoing changes, each happening at its own pace. It
is characterized by processes of dissolution and
constitution that may be collaborative, oppositional, or simply
operating side by side. The field of culture is pervaded by competing
claims to power and mechanisms for exerting it. This leads to conflicts
about which frames of reference should be adopted for different fields
and within different social groups. In such conflicts,
self-determination and external determination interact until a point is
reached at which both sides are mutually constituted. This, in turn,
changes the conditions that give rise to shared meaning and personal
identity.
In what follows, this broadly post-structuralist perspective will inform
my discussion of the causes and formational conditions of cultural
orders and their practices. Culture will be conceived throughout as
something heterogeneous and hybrid. It draws from many sources; it is
motivated by the widest possible variety of desires, intentions, and
compulsions; and it mobilizes whatever resources might be necessary for
the constitution of meaning. This emphasis on the materiality of culture
is also reflected in the concept of the digital. Media are relational
technologies, which means that they facilitate certain types of
connection between humans and
objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
material and immaterial goods, and in the constitution and coordination
of personal and collective activity. In this regard, the focus is less
on the dominance of a certain class []{#Page_8 type="pagebreak"
title="8"}of technological artifacts -- the computer, for instance --
and even less on distinguishing between "digital" and "analog,"
"material" and "immaterial." Even in the digital condition, the analog
has not gone away. Rather, it has been re-evaluated and even partially
upgraded. The immaterial, moreover, is never entirely without
materiality. On the contrary, the fleeting impulses of digital
communication depend on global and unmistakably material infrastructures
that extend from mines beneath the surface of the earth, from which rare
earth metals are extracted, all the way into outer space, where
satellites are circling around above us. Such things may be ignored
because they are outside the experience of everyday life, but that does
not mean that they have disappeared or that they are of any less
significance. "Digital" thus refers to historically new possibilities
for constituting and connecting various human and non-human actors,
which is not limited to digital media but rather appears everywhere as a
relational paradigm that alters the realm of possibility for numerous
materials and actors. My understanding of the digital thus approximates
the concept of the "post-digital," which has been gaining currency over
the past few years within critical media cultures. Here, too, the
distinction between "new" and "old" media and all of the ideological
baggage associated with it -- for instance, that the new represents the
future while the old represents the past -- have been rejected. The
aesthetic projects that continue to define the image of the "digital" --
immateriality, perfection, and virtuality -- have likewise been
discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
"post-digital" is a critical response to this techno-utopian aesthetic
and its attendant economic and political perspectives. According to the
cultural theorist Florian Cramer, the concept accommodates the fact that
"new ethical and cultural conventions which became mainstream with
internet communities and open-source culture are being retroactively
applied to the making of non-digital and post-digital media
products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
that process-based practices oriented toward open interaction, which
first developed within digital media, have since begun to appear in more
and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9 type="pagebreak" title="9"}
For the historical, cultural-theoretical, and political perspectives
developed in this book, however, the concept of the post-digital is
somewhat problematic, for it requires the narrow context of media art
and its fixation on technology in order to become a viable
counter-position. Without this context, certain misunderstandings are
impossible to avoid. The prefix "post-," for instance, is often
interpreted in the sense that something is over or that we have at least
grasped the matters at hand and can thus turn to something new. The
opposite is true. The most enduringly relevant developments are only now
beginning to adopt a specific form, long after digital infrastructures
and the practices made popular by them have become part of our everyday
lives. Or, as the communication theorist and consultant Clay Shirky puts
it, "Communication tools don\'t get socially interesting until they get
technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
only today, now that our fascination for this technology has waned and
its promises sound hollow, that culture and society are being defined by
the digital condition in a comprehensive sense. Before, this was the
case in just a few limited spheres. It is this hybridization and
solidification of the digital -- the presence of the digital beyond
digital media -- that lends the digital condition its dominance. As to
the concrete realities in which these things will materialize, this is
currently being decided in an open and ongoing process. The aim of this
book is to contribute to our understanding of this process.[]{#Page_10
type="pagebreak" title="10"}
:::
::: {.section .notesList}
[1](#f6-note-0001a){#f6-note-0001} Dan Biddle, "Five Million Tweets for
\#Eurovision 2014," *Twitter UK* (May 11, 2014), online.
[2](#f6-note-0002a){#f6-note-0002} Ministerium für Kultus, Jugend und
Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
von Leitprinzipien," online \[--trans.\].
[3](#f6-note-0003a){#f6-note-0003} As early as 1995, Wolfgang Coy
suggested that McLuhan\'s metaphor should be supplanted by the concept
of the "Turing Galaxy," but this never caught on. See his introduction
to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*
(Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
type="pagebreak" title="176"}
[4](#f6-note-0004a){#f6-note-0004} According to the analysis of the
Spanish sociologist Manuel Castells, this crisis began almost
simultaneously in highly developed capitalist and socialist societies,
and it did so for the same reason: the paradigm of "industrialism" had
reached the limits of its productivity. Unlike the capitalist societies,
which were flexible enough to tame the crisis and reorient their
economies, the socialism of the 1970s and 1980s experienced stagnation
until it ultimately, in a belated effort to reform, collapsed. See
Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
2010), pp. 5--68.
[5](#f6-note-0005a){#f6-note-0005} Felix Stalder, *Der Autor am Ende
der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).
[6](#f6-note-0006a){#f6-note-0006} For my preliminary thoughts on this
topic, see Felix Stalder, "Autonomy and Control in the Era of
Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
*Surveillance & Society* 1 (2002): 120--4. For a discussion of these
approaches, see the working paper by Maja van der Velden, "Personal
Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
(2011), online.
[7](#f6-note-0007a){#f6-note-0007} Accordingly, the "new social" media
are mass media in the sense that they influence broadly disseminated
patterns of social relations and thus shape society as much as the
traditional mass media had done before them.
[8](#f6-note-0008a){#f6-note-0008} Kim Cascone, "The Aesthetics of
Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
*Computer Music Journal* 24/2 (2000): 12--18.
[10](#f6-note-0010a){#f6-note-0010} In the field of visual arts,
similar considerations have been made regarding "post-internet art." See
Artie Vierkant, "The Image Object Post-Internet,"
[jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
Art Movement," *Artspace* (March 18, 2014), online.
[11](#f6-note-0011a){#f6-note-0011} Clay Shirky, *Here Comes Everybody:
The Power of Organizing without Organizations* (New York: Penguin,
2008), p. 105.
:::
:::
::: {.section}
Many authors have interpreted the new cultural realities that
characterize our daily lives as a direct consequence of technological
developments: the internet is to blame! This assumption is not only
empirically untenable; it also leads to a problematic assessment of the
current situation. Apparatuses are represented as "central actors," and
this suggests that new technologies have suddenly revolutionized a
situation that had previously been stable. Depending on one\'s point of
view, this is then regarded as "a blessing or a
curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
however, reveals an entirely different picture. Established cultural
practices and social institutions had already been witnessing the
erosion of their self-evident justification and legitimacy, long before
they were faced with new technologies and the corresponding demands
these make on individuals. Moreover, the allegedly new types of
coordination and cooperation are also not so new after all. Many of them
have existed for a long time. At first most of them were totally
separate from the technologies for which, later on, they would become
relevant. It is only in retrospect that these developments can be
identified as beginnings, and it can be seen that much of what we regard
today as novel or revolutionary was in fact introduced at the margins of
society, in cultural niches that were unnoticed by the dominant actors
and institutions. The new technologies thus evolved against a
[]{#Page_11 type="pagebreak" title="11"}background of processes of
societal transformation that were already under way. They could only
have been developed once a vision of their potential had been
formulated, and they could only have been disseminated where demand for
them already existed. This demand was created by social, political, and
economic crises, which were themselves initiated by changes that were
already under way. The new technologies seemed to provide many differing
and promising answers to the urgent questions that these crises had
prompted. It was thus a combination of positive vision and pressure that
motivated a great variety of actors to change, at times with
considerable effort, the established processes, mature institutions, and
their own behavior. They intended to appropriate, for their own
projects, the various and partly contradictory possibilities that they
saw in these new technologies. Only then did a new technological
infrastructure arise.
This, in turn, created the preconditions for previously independent
developments to come together, strengthening one another and enabling
them to spread beyond the contexts in which they had originated. Thus,
they moved from the margins to the center of culture. And by
intensifying the crisis of previously established cultural forms and
institutions, they became dominant and established new forms and
institutions of their own.
:::
::: {.section}
The Expansion of the Social Basis of Culture {#c1-sec-0002}
--------------------------------------------
Watching television discussions from the 1950s and 1960s today, one is
struck not only by the billows of cigarette smoke in the studio but also
by the homogeneous spectrum of participants. Usually, it was a group of
white and heteronormatively behaving men speaking with one
another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
who held the important institutional positions in the centers of the
West. As a rule, those involved were highly specialized representatives
from the cultural, economic, scientific, and political spheres. Above
all, they were legitimized to appear in public to articulate their
opinions, which were to be regarded by others as relevant and worthy of
discussion. They presided over the important debates of their time. With
few exceptions, other actors and their deviant opinions -- there
[]{#Page_12 type="pagebreak" title="12"}has never been a time without
them -- were either not taken seriously at all or were categorized as
indecent, incompetent, perverse, irrelevant, backward, exotic, or
idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
the social basis of culture was beginning to expand, though the actors
at the center of the discourse had failed to notice this. Communicative
and cultural processes were gaining significance in more and more
places, and excluded social groups were self-consciously developing
their own language in order to intervene in the discourse. The rise of
the knowledge economy, the increasingly loud critique of
heteronormativity, and a fundamental cultural critique posed by
post-colonialism enabled a greater number of people to participate in
public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
complexity, I will treat them on different levels: I will depict the
rise of the knowledge economy as a structural change in labor; I will
reconstruct the critique of heteronormativity by outlining the origins
and transformations of the gay movement in West Germany; and I will
discuss post-colonialism as a theory that introduced new concepts of
cultural multiplicity and hybridization -- concepts that are now
influencing the digital condition far beyond the limits of the
post-colonial discourse, and often without any reference to this
discourse at all.
::: {.section}
### The growth of the knowledge economy {#c1-sec-0003}
At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
concerned with patents and copyright law. In line with the neo-classical
Austrian School, he considered both to be problematic (because
state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
longer he studied the monopoly of the patent system in particular, the
more far-reaching its consequences seemed to him. He maintained that the
patent system was intertwined with something that might be called the
"economy of invention" -- ultimately, patentable insights had to be
produced in the first place -- and that this was in turn part of a much
larger economy of knowledge. The latter encompassed government agencies
as well as institutions of education, research, and development
[]{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
and certain corporate laboratories), which had been increasing steadily
in number since Roosevelt\'s New Deal. Yet it also included the
expanding media sector and those industries that were responsible for
providing technical infrastructure. Machlup subsumed all of these
institutions and sectors under the concept of the "knowledge economy," a
term of his own invention. Their common feature was that essential
aspects of their activities consisted in communicating things to other
people ("telling anyone anything," as he put it). Thus, the employees
were not only recipients of information or instructions; rather, in one
way or another, they themselves communicated, be it merely as a
secretary who typed up, edited, and forwarded a piece of shorthand
dictation. In his book *The Production and Distribution of Knowledge in
the United States*, published in 1962, Machlup gathered empirical
material to demonstrate that the American economy had entered a new
phase that was distinguished by the production, exchange, and
application of abstract, codified
knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
longer entirely novel at the time, but it had never before been
presented in such an empirically detailed and comprehensive
manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
economy surprised Machlup himself: in his book, he concluded that as
much as 43 percent of all labor activity was already engaged in this
sector. This high number came about because, until then, no one had put
forward the idea of understanding such a variety of activities as a
single unit.
Machlup\'s categorization was indeed quite innovative, for the dynamics
propelling the sectors he associated with one another were not only
very different but had also originated as integral components in the
development of the industrial production of goods. They were more of
an extension of such production than a break with it. The production and
circulation of goods had been expanding and accelerating as early as the
nineteenth century, though at highly divergent rates from one region or
sector to another. New markets were created in order to distribute goods
that were being produced in greater numbers; new infrastructure for
transportation and communication was established in order to serve these
large markets, which were mostly in the form of national territories
(including their colonies). This []{#Page_14 type="pagebreak"
title="14"}enabled even larger factories to be built in order to
exploit, to an even greater extent, the cost advantages of mass
production. In order to control these complex processes, new professions
arose with different types of competencies and working conditions. The
office became a workplace for an increasing number of people -- men and
women alike -- who, in one form or another, had something to do with
information processing and communication. Yet all of this required not
only new management techniques. Production and products also became more
complex, so that entire corporate sectors had to be restructured.
Whereas the first decisive inventions of the industrial era were still
made by more or less educated tinkerers, during the last third of the
nineteenth century, invention itself came to be institutionalized. In
Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
Siemens & Halske) exemplifies this transformation. Within 50 years, a
company that began in a proverbial workshop in a Berlin backyard became
a multinational high-tech corporation. It was in such corporate
laboratories, which were established around the year 1900, that the
"industrialization of invention" or the "scientification of industrial
production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
words, even the processes employed in factories and the goods that they
produced became knowledge-intensive. Their invention, planning, and
production required a steadily growing expansion of activities, which
today we would refer to as research and development. The informatization
of the economy -- the acceleration of mass production, the comprehensive
application of scientific methods to the organization of labor, and the
central role of research and development in industry -- was hastened
enormously by a world war that was waged on an industrial scale to an
extent that had never been seen before.
Another important factor for the increasing significance of the
knowledge economy was the development of the consumer society. Over the
course of the last third of the nineteenth century, despite dramatic
regional and social disparities, an increasing number of people profited
from the economic growth that the Industrial Revolution had instigated.
Wages increased and basic needs were largely met, so that a new social
stratum arose, the middle class, which was able to spend part of its
income on other things. But on what? First, []{#Page_15 type="pagebreak"
title="15"}new needs had to be created. The more production capacities
increased, the more they had to be rethought in terms of consumption.
Thus, in yet another way, the economy became more knowledge-intensive.
It was now necessary to become familiar with, understand, and stimulate
the interests and preferences of consumers, in order to entice them to
purchase products that they did not urgently need. This knowledge did
little to enhance the material or logistical complexity of goods or
their production; rather, it was reflected in the increasingly extensive
communication about and through these goods. The beginnings of this
development were captured by Émile Zola in his 1883 novel *The Ladies\'
Paradise*, which was set in the new world of a semi-fictitious
department store bearing that name. In its opening scene, the young
protagonist Denise Baudu and her brother Jean, both of whom have just
moved to Paris from a provincial town, encounter for the first time the
artfully arranged women\'s clothing -- exhibited with all sorts of
tricks involving lighting, mirrors, and mannequins -- in the window
displays of the store. The sensuality of the staged goods is so
overwhelming that not only are both of them struck dumb, but Jean even
"blushes."
It was the economy of affects that brought blood to Jean\'s cheeks. At
that time, strategies for attracting the attention of customers did not
yet have a scientific and systematic basis. Just as the first inventions
in the age of industrialization were made by amateurs, so too was the
economy of affects developed intuitively and gradually rather than as a
planned or conscious paradigm shift. That it was possible to induce and
direct affects by means of targeted communication was the pioneering
discovery of the Austrian-American Edward Bernays. During the 1920s, he
combined the ideas of his uncle Sigmund Freud about unconscious
motivations with the sociological research methods of opinion surveys to
form a new discipline: market
research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
basis of a new field of activity, which he at first called "propaganda"
but then later referred to as "public
relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
be it for economic or political ends, was now placed on a systematic
foundation that came to distance itself more and more from the pure
"conveyance of information." Communication became a strategic field for
corporate and political disputes, and the mass media []{#Page_16
type="pagebreak" title="16"}became their locus of negotiation. Between
1880 and 1917, for instance, commercial advertising costs in the United
States increased by more than 800 percent, and the leading advertising
firms, using the same techniques with which they attracted consumers to
products, were successful in selling to the American public the idea of
their nation entering World War I. Thus, a media industry in the modern
sense was born, and it expanded along with the rapidly growing market
for advertising.[^11^](#c1-note-0011){#c1-note-0011a}
In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
thus explained the existence of an already advanced knowledge economy in
the United States. His arguments fell on extremely fertile soil, for an
intellectual transformation had taken place in other areas of science as
well. A few years earlier, for instance, cybernetics had given the
concepts "information" and "communication" their first scientifically
precise (if somewhat idiosyncratic) definitions and had assigned to them
a position of central importance in all scientific disciplines, not to
mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
investigation seemed to confirm this in the case of the economy, given
that the knowledge economy was primarily concerned with information and
communication. Since then, numerous analyses, formulas, and slogans have
repeated, modified, refined, and criticized the idea that the
knowledge-based activities of the economy have become increasingly
important. In the 1970s this discussion was associated above all with
the notion of the "post-industrial
society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
and in the 1990s the debate revolved around the "network
society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
popular concepts. What these approaches have in common is that they each
diagnose a comprehensive societal transformation that, as regards the
creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
that we know how to distinguish the former from the latter. This is not
unproblematic, however, because in practice the two are usually tightly
intertwined. Moreover, whoever maintains that communicative activities
have taken the place of industrial production in our society has adopted
a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
Factory jobs have not simply disappeared; they have just been partially
relocated outside of Western economies. The assertion that communicative
activities are somehow of "greater value" hardly chimes with the reality
of today\'s new "service jobs," many of which pay no more than the
minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
sort, however, have done little to reduce the effectiveness of this
analysis -- especially its political effectiveness -- for it does more
than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
should be promoted that it considers economically promising, and that
society should be reorganized accordingly. Since the 1970s, there has
thus been a feedback loop between scientific analysis and political
agendas. More often than not, it is hardly possible to distinguish
between the two. Especially in Britain and the United States, the
economic transformation of the 1980s was imposed insistently and with
political calculation (the weakening of labor unions).
There are, however, important differences between the developments of
the so-called "post-industrial society" of the 1970s and those of the
so-called "network society" of the 1990s, even if both terms are
supposed to stress the increased significance of information, knowledge,
and communication. With regard to the digital condition, the most
important of these differences are the greater flexibility of economic
activity in general and employment relations in particular, as well as
the dismantling of social security systems. Neither phenomenon played
much of a role in analyses of the early 1970s. The development since
then can be traced back to two currents that could not seem more
different from one another. At first, flexibility was demanded in the
name of a critique of the value system imposed by bureaucratic-bourgeois
society (including the traditional organization of the workforce). It
originated in the new social movements that had formed in the late
1960s. Later on, toward the end of the 1970s, it then became one of the
central points of the neoliberal critique of the welfare state. With
completely different motives, both sides sang the praises of autonomy
and spontaneity while rejecting the disciplinary nature of hierarchical
organization. They demanded individuality and diversity rather than
conformity to prescribed roles. Experimentation, openness to []{#Page_18
type="pagebreak" title="18"}new ideas, flexibility, and change were now
established as fundamental values with positive connotations. Both
movements operated with the attractive idea of personal freedom. The new
social movements understood this in a social sense as the freedom of
personal development and coexistence, whereas neoliberals understood it
in an economic sense as the freedom of the market. In the 1980s, the
neoliberal ideas prevailed in large part because some of the values,
strategies, and methods propagated by the new social movements were
removed from their political context and appropriated in order to
breathe new life -- a "new spirit" -- into capitalism and thus to rescue
industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
An army of management consultants, restructuring experts, and new
companies began to promote flat hierarchies, self-responsibility, and
innovation; with these aims in mind, they set about reorganizing large
corporations into small and flexible units. Labor and leisure were no
longer supposed to be separated, for all aspects of a given person could
be integrated into his or her work. In order to achieve economic success
in this new capitalism, it became necessary for every individual to
identify himself or herself with his or her profession. Large
corporations were restructured in such a way that entire departments
found themselves transformed into independent "profit centers." This
happened in the name of creating more leeway for decision-making and of
optimizing the entrepreneurial spirit on all levels, the goals being to
increase value creation and to provide management with more fine-grained
powers of intervention. These measures, in turn, created the need for
computers and the need for them to be networked. Large corporations
reacted in this way to the emergence of highly specialized small
companies which, by networking and cooperating with other firms,
succeeded in quickly and flexibly exploiting niches in the expanding
global markets. In the management literature of the 1980s, the
catchphrases for this were "company networks" and "flexible
specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
the 1990s, the sociologist Manuel Castells was able to conclude that the
actual productive entity was no longer the individual company but rather
the network consisting of companies and corporate divisions of various
sizes. In Castells\'s estimation, the decisive advantage of the network
is its ability to customize its elements and their configuration
[]{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
requirements of the "project" at
hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
strategic control centers and as economic and legal units.
This economic structural transformation was already well under way when
the internet emerged as a mass medium around the turn of the millennium.
As a consequence, change became more radical and penetrated into an
increasing number of areas of value creation. The political agenda
oriented itself toward the vision of "creative industries," a concept
developed in 1997 by the newly elected British government under Tony
Blair. A Creative Industries Task Force was established right away, and
its first step was to identify "those activities which have their
origins in individual creativity, skill and talent and which have the
potential for wealth and job creation through the generation and
exploitation of intellectual
property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
the beginning of the 1960s, the task force brought together existing
areas of activity into a new category. Such activities included
advertising, computer games, architecture, music, arts and antique
markets, publishing, design, software and computer services, fashion,
television and radio, and film and video. These activities were elevated to
matters of political importance on account of their potential to create
wealth and jobs. Not least because of this clever presentation of
categories -- no distinction was made between the BBC, an almighty
public-service provider, and fledgling companies in precarious
circumstances -- it was possible to proclaim not only that the creative
industries were contributing a relevant portion of the nation\'s
economic output, but also that this sector was growing at an especially
fast rate. It was reported that, in London, the creative industries were
already responsible for one out of every five new jobs. When compared
with traditional terms of employment as regards income, benefits, and
prospects for advancement, however, many of these positions entailed a
considerable downgrade for the employees in question (who were now
treated as independent contractors). This fact was either ignored or
explicitly interpreted as a sign of the sector\'s particular
dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
new millennium, the idea that individual creativity plays a central role
in the economy was given further traction by []{#Page_20
type="pagebreak" title="20"}the sociologist and consultant Richard
Florida, who argued that creativity was essential to the future of
cities and even announced the rise of the "creative class." As to the
preconditions that have to be met in order to tap into this source of
wealth, he devised a simple formula that would be easy for municipal
bureaucrats to understand: "technology, tolerance and talent." Talent,
as defined by Florida, is based on individual creativity and education
and manifests itself in the ability to generate new jobs. He was thus
able to declare talent a central element of economic
growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
resources, what we need in addition to technology is, above all,
tolerance; that is, "an open culture -- one that does not discriminate,
does not force people into boxes, allows us to be ourselves, and
validates various forms of family and of human
identity."[^23^](#c1-note-0023){#c1-note-0023a}
The idea that a public welfare state should ensure the social security
of individuals was considered obsolete. Collective institutions, which
could have provided a degree of stability for people\'s lifestyles, were
dismissed or regarded as bureaucratic obstacles. The more or less
directly evoked role model for all of this was the individual artist,
who was understood as an individual entrepreneur, a sort of genius
suitable for the masses. For Florida, a central problem was that,
according to his own calculations, only about a third of the people
living in North American and European cities were working in the
"creative sector," while the innate creativity of everyone else was
going to waste. Even today, the term "creative industry," along with the
assumption that the internet will provide increased opportunities,
serves to legitimize the effort to restructure all areas of the economy
according to the needs of the knowledge economy and to privilege the
network over the institution. In times of social cutbacks and empty
public purses, especially in municipalities, this message was warmly
received. One mayor, who as the first openly gay top politician in
Germany exemplified tolerance for diverse lifestyles, even adopted the
slogan "poor but sexy" for his city. Everyone was supposed to exploit
his or her own creativity to discover new niches and opportunities for
monetization -- a magic formula that was supposed to bring about a new
urban revival. Today there is hardly a city in Europe that does not
issue a report about its creative economy, []{#Page_21 type="pagebreak"
title="21"}and nearly all of these reports cite, directly or indirectly,
Richard Florida.
As already seen in the context of the knowledge economy, so too in the
case of creative industries do measurable social change, wishful
thinking, and political agendas blend together in such a way that it is
impossible to identify a single cause for the developments taking place.
The consequences, however, are significant. Over the last two
generations, the demands of the labor market have fundamentally changed.
Higher education and the ability to acquire new knowledge independently
are now, to an increasing extent, required and expected as
qualifications and personal attributes. The desired or enforced ability
to be flexible at work, the widespread cooperation across institutions,
the uprooted nature of labor, and the erosion of collective models for
social security have displaced many activities, which once took place
within clearly defined institutional or personal limits, into a new
interstitial space that is neither private nor public in the classical
sense. This is the space of networks, communities, and informal
cooperation -- the space of sharing and exchange that has since been
enabled by the emergence of ubiquitous digital communication. It allows
an increasing number of people, whether willingly or otherwise, to
envision themselves as active producers of information, knowledge,
capability, and meaning. And because it is associated in various ways
with the space of market-based exchange and with the bourgeois political
sphere, it has lasting effects on both. This interstitial space becomes
all the more important as fewer people are willing or able to rely on
traditional institutions for their economic security. For, within it,
personal and digital-based networks can and must be developed as
alternatives, regardless of whether they prove sustainable for the long
term. As a result, more and more actors, each with their own claims to
meaning, have been rushing away from the private personal sphere into
this new interstitial space. By now, this has become such a normal
practice that whoever is *not* active in this ever-expanding
interstitial space, which is rapidly becoming the main social sphere --
whoever, that is, lacks a publicly visible profile on social mass media
like Facebook, or does not number among those producing information and
meaning and is thus so inconspicuous online as []{#Page_22
type="pagebreak" title="22"}to yield no search results -- now stands out
in a negative light (or, in far fewer cases, acquires a certain prestige
on account of this very absence).
:::
::: {.section}
### The erosion of heteronormativity {#c1-sec-0004}
In this (sometimes more, sometimes less) public space for the continuous
production of social meaning (and its exploitation), there is no
question that the professional middle class is
over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
short-sighted, however, to reduce those seeking autonomy and the
recognition of individuality and social diversity to the role of poster
children for the new spirit of
capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
movements, for instance, initiated a social shift that has allowed an
increasing number of people to demand, if nothing else, the right to
participate in social life in a self-determined manner; that is,
according to their own standards and values.
Especially effective was the critique of patriarchal and heteronormative
power relations, modes of conduct, and
identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
political upheavals at the end of the 1960s, the new women\'s and gay
movements developed into influential actors. Their greatest achievement
was to establish alternative cultural forms, lifestyles, and strategies
of action in or around the mainstream of society. How this was done can
be demonstrated by tracing, for example, the development of the gay
movement in West Germany.
In the fall of 1969, the liberalization of Paragraph 175 of the German
Criminal Code came into effect. From then on, sexual activity between
adult men was no longer punishable by law (women were not mentioned in
this context). For the first time, a man could now express himself as a
homosexual outside of semi-private space without immediately being
exposed to the risk of criminal prosecution. This was a necessary
precondition for the ability to defend one\'s own rights. As early as
1971, the struggle for the recognition of gay life experiences reached
the broader public when Rosa von Praunheim\'s film *It Is Not the
Homosexual Who Is Perverse, but the Society in Which He Lives* was
screened at the Berlin International Film Festival and then, shortly
thereafter, broadcast on public television in North Rhine-Westphalia.
The film, which is firmly situated in the agitprop tradition,
[]{#Page_23 type="pagebreak" title="23"}follows a young provincial man
through the various milieus of Berlin\'s gay subcultures: from a
monogamous relationship to nightclubs and public bathrooms until, at the
end, he is enlightened by a political group of men who explain that it
is not possible to lead a free life in a niche, as his own emancipation
can only be achieved by a transformation of society as a whole. The film
closes with a not-so-subtle call to action: "Out of the closets, into
the streets!" Von Praunheim understood this emancipation to be a process
that encompassed all areas of life and had to be carried out in public;
it could only achieve success, moreover, in solidarity with other
freedom movements such as the Black Panthers in the United States and
the new women\'s movement. The goal, according to this film, is to
articulate one\'s own identity as a specific and differentiated identity
with its own experiences, values, and reference systems, and to anchor
this identity within a society that not only tolerates it but also
recognizes it as having equal validity.
At first, however, the film triggered vehement controversies, even
within the gay scene. The objection was that it attacked the gay
subculture, which was not yet prepared to defend itself publicly against
discrimination. Despite or (more likely) because of these controversies,
more than 50 groups of gay activists soon formed in Germany. Such
groups, largely composed of left-wing alternative students, included,
for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
Zelle Schwul (RotZSchwul) in Frankfurt am
Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
was to have Paragraph 175 struck entirely from the legal code (which was
not achieved until 1994). This cause was framed within a general
struggle to overcome patriarchy and capitalism. At the earliest gay
demonstrations in Germany, which took place in Münster in April 1972,
protesters rallied behind the following slogan: "Brothers and sisters,
gay or not, it is our duty to fight capitalism." This was understood as
a necessary subordination to the greater struggle against what was known
in the terminology of left-wing radical groups as the "main
contradiction" of capitalism (that between capital and labor), and it
led to strident differences within the gay movement. The dispute
escalated during the next year. After the so-called *Tuntenstreit*, or
"Battle of the Queens," which was []{#Page_24 type="pagebreak"
title="24"}initiated by activists from Italy and France who had appeared
in drag at the closing ceremony of the HAW\'s Spring Meeting in West
Berlin, the gay movement was divided, or at least moving in a new
direction. At the heart of the matter were the following questions: "Is
there an inherent (many speak of an autonomous) position that gays hold
with respect to the issue of homosexuality? Or can a position on
homosexuality only be derived in association with the traditional
workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
words, was discrimination against homosexuality part of the social
divide caused by capitalism (that is, one of its "ancillary
contradictions") and thus only to be overcome by overcoming capitalism
itself, or was it something unrelated to the "essence" of capitalism, an
independent conflict requiring different strategies and methods? This
conflict could never be fully resolved, but the second position, which
was more interested in overcoming legal, social, and cultural
discrimination than in struggling against economic exploitation, and
which focused specifically on the social liberation of gays, proved to
be far more dynamic in the long term. This was not least because both
the old and new left were themselves not free of homophobia and because
the entire radical student movement of the 1970s fell into crisis.
Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
realized through the efforts of artistic and (increasingly) commercial
producers of images, texts, and
sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
intellectuals developed a language with which they could speak
assertively in public about topics that had previously been taboo.
Inspired by the expression "gay pride," which originated in the United
States, they began to use the term *schwul* ("gay"), which until then
had possessed negative connotations, with growing confidence. They
founded numerous gay and lesbian cultural initiatives, theaters,
publishing houses, magazines, bookstores, meeting places, and other
associations in order to counter the misleading or (in their eyes)
outright false representations of the mass media with their own
multifarious media productions. In doing so, they typically followed a
dual strategy: on the one hand, they wanted to create a space for the
members of the movement in which it would be possible to formulate and
live different identities; on the other hand, they were fighting to be
accepted by society at large. While []{#Page_25 type="pagebreak"
title="25"}a broader and broader spectrum of gay positions, experiences,
and aesthetics was becoming visible to the public, the connection to
left-wing radical contexts became weaker. Founded as early as 1974, and
likewise in West Berlin, the General Homosexual Working Group
(Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
politics into mainstream society by defining that politics -- on the
basis of bourgeois, individual rights -- as a "politics of
anti-discrimination." These efforts achieved a milestone in 1980 when,
in the run-up to the parliamentary election, a podium discussion was
held with representatives of all major political parties on the topic of
the law governing sexual offences. The discussion took place in the
Beethovenhalle in Bonn, which was the largest venue for political events
in the former capital. Several participants considered the event to be a
"disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
of internal conflicts (not least that between revolutionary and
integrative positions). Yet the fact remains that representatives were
present from every political party, and this alone was indicative of an
unprecedented amount of public awareness for those demanding equal
rights.
The struggle against discrimination and for social recognition reached
an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
the magazine *Der Spiegel* devoted its first cover story to the disease,
thus bringing it to the awareness of the broader public. In the same
year, the non-profit organization Deutsche Aids-Hilfe was founded to
prevent further cases of discrimination, for *Der Spiegel* was not the
only publication at the time to refer to AIDS as a "homosexual
epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
HIV/AIDS required a comprehensive mobilization. Funding had to be raised
in order to deal with the social repercussions of the epidemic, to teach
people about safe sexual practices for everyone and to direct research
toward discovering causes and developing potential cures. The immediate
threat that AIDS represented, especially while so little was known about
the illness and its treatment remained a distant hope, created an
impetus for mobilization that led to alliances between the gay movement,
the healthcare system, and public authorities. Thus, the AIDS Inquiry
Committee, sponsored by the conservative Christian Democratic Union,
concluded in 1988 that, in the fight against the illness, "the
homosexual subculture is []{#Page_26 type="pagebreak"
title="26"}especially important. This informal structure should
therefore neither be impeded nor repressed but rather, on the contrary,
recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
crisis proved to be a catalyst for advancing the integration of gays
into society and for expanding what could be regarded as acceptable
lifestyles, opinions, and cultural practices. As a consequence,
homosexuals began to appear more frequently in the media, though their
presence would never match that of heterosexuals. As of 1985, the
television show *Lindenstraße* featured an openly gay protagonist, and
the first kiss between men was aired in 1987. The episode still provoked
a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
second time -- but this was already a rearguard action and the
integration of gays (and lesbians) into the social mainstream continued.
In 1993, the first gay and lesbian city festival took place in Berlin,
and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
Cologne Pride Day involved 1.2 million participants and attendees, thus
surpassing for the first time the attendance at the traditional Rose
Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
was already prepared to maintain: "To be homosexual has become
increasingly normalized, even if homophobia lives on in the depths of
the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
normalization was also reflected in a study published by the Ministry of
Justice in the year 2000, which stressed "the similarity between
homosexual and heterosexual relationships" and, on this basis, made an
argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
Around the year 2000, however, the classical gay movement had already
passed its peak. A profound transformation had begun to take place in
the middle of the 1990s. It lost its character as a new social movement
(in the style of the 1970s) and began to splinter inwardly and
outwardly. One could say that it transformed from a mass movement into a
multitude of variously networked communities. The clearest sign of this
transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
transgender), which, since the mid-1990s, has represented the internal
heterogeneity of the movement as it has shifted toward becoming a
network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
radical actors were already speaking against the normalization of
homosexuality. Queer theory, for example, was calling into question the
"essentialist" definition of gender []{#Page_27 type="pagebreak"
title="27"}-- that is, any definition reducing it to an immutable
essence -- with respect to both its physical dimension (sex) and its
social and cultural dimension (gender
proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
for the articulation of experiences, self-descriptions, and lifestyles
that, on every level, are located beyond the classical attributions of
men and women. A new generation of intellectuals, activists, and artists
took the stage and developed -- yet again through acts of aesthetic
self-empowerment -- a language that enabled them to import, with
confidence, different self-definitions into the public sphere. An
example of this is the adoption of inclusive plural forms in German
(*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
attention to the gaps and possibilities between male and female
identities that are also expressed in the language itself. Just as with
the terms "gay" or *schwul* some 30 years before, in this case, too, an
important element was the confident and public adoption and semantic
conversion of a formerly insulting word ("queer") by the very people and
communities against whom it used to be
directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
these developments was the simultaneity of social (amateur) and
artistic/scientific (professional) cultural production. The goal,
however, was less to produce a clear antithesis than it was to oppose
rigid attributions by underscoring mutability, hybridity, and
uniqueness. Both the scope of what could be expressed in public and the
circle of potential speakers expanded yet again. And, at least to some
extent, the drag queen Conchita Wurst popularized complex gender
constructions that went beyond the simple woman/man dualism. All of that
said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
lives on in the depths of the collective disposition" -- continued to
hold true.
If the gay movement is representative of the social liberation of the
1970s and 1980s, then it is possible to regard its transformation into
the LGBT movement during the 1990s -- with its multiplicity and fluidity
of identity models and its stress on mutability and hybridity -- as a
sign of the reinvention of this project within the context of an
increasingly dominant digital condition. With this transformation,
however, the diversification and fluidification of cultural practices
and social roles have not yet come to an end. Ways of life that were
initially subcultural and facing existential pressure []{#Page_28
type="pagebreak" title="28"}are gradually entering the mainstream. They
are expanding the range of readily available models of identity for
anyone who might be interested, be it with respect to family forms
(e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
other principles of life and belief. All of them are seeking public
recognition for a new frame of reference for social meaning that has
originated from their own activity. This is necessarily a process
characterized by conflicts and various degrees of resistance, including
right-wing populism that seeks to defend "traditional values," but many
of these movements will ultimately succeed in providing more people with
the opportunity to speak in public, thus broadening the palette of
themes that are considered to be important and legitimate.
:::
::: {.section}
### Beyond center and periphery {#c1-sec-0005}
In order to reach a better understanding of the complexity involved in
the expanding social basis of cultural production, it is necessary to
shift yet again to a different level. For, just as it would be myopic to
examine the multiplication of cultural producers only in terms of
professional knowledge workers from the middle class, it would likewise
be insufficient to situate this multiplication exclusively in the
centers of the West. The entire system of categories that justified the
differentiation between the cultural "center" and the cultural
"periphery" has begun to falter. This complex and multilayered process
has been formulated and analyzed by the theory of "post-colonialism."
Long before digital media made the challenge of cultural multiplicity a
quotidian issue in the West, proponents of this theory had developed
languages and terminologies for negotiating different positions without
needing to impose a hierarchical order.
Since the 1970s, the theoretical current of post-colonialism has been
examining the cultural and epistemic dimensions of colonialism that,
even after its end as a territorial system, have remained responsible
for the continuation of dependent relations and power differentials. For
my purposes -- which are to develop a European perspective on the
factors ensuring that more and more people are able to participate in
cultural []{#Page_29 type="pagebreak" title="29"}production -- two
points are especially relevant because their effects reverberate in
Europe itself. First is the deconstruction of the categories "West" (in
the sense of the center) and "East" (in the sense of the periphery). And
second is the focus on hybridity as a specific way for non-Western
actors to deal with the dominant cultures of former colonial powers,
which have continued to determine significant portions of globalized
culture. The terms "West" and "East," "center" and "periphery," do not
simply describe existing conditions; rather, they are categories that
contribute, in an important way, to the creation of the very conditions
that they presume to describe. This may sound somewhat circular, but it
is precisely from this circularity that such cultural classifications
derive their strength. The world that they illuminate is immersed in
their own light. The category "East" -- or, to use the term of the
literary theorist Edward Said,
"orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
representation that pervades Western thinking. Within this system,
Europe or the West (as the center) and the East (as the periphery)
represent asymmetrical and antithetical concepts. This construction
achieves a dual effect. As a self-description, on the one hand, it
contributes to the formation of our own identity, for Europeans
attribute to themselves and to their continent such features as
"rationality," "order," and "progress," while on the other hand
identifying the alternative with "superstition," "chaos," or
"stagnation." The East, moreover, is used as an exotic projection screen
for our own suppressed desires. According to Said, a representational
system of this sort can only take effect if it becomes "hegemonic"; that
is, if it is perceived as self-evident and no longer as an act of
attribution but rather as one of description, even and precisely by
those against whom the system discriminates. Said\'s accomplishment is
to have worked out how far-reaching this system was, and how, in many
areas, it remains so today. It extended (and extends) from scientific
disciplines, whose researchers discussed (until the 1980s) the theory of
"oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
and art -- the motif of the harem was especially popular, particularly
in paintings of the late nineteenth
century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
culture, where, as of 1913 in the United States, the cigarette brand
Camel (introduced to compete with the then-leading brand, Fatima) was
meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
system of representation, however, was more than a means of describing
oneself and others; it also served to legitimize the allocation of all
knowledge and agency on to one side, that of the West. Such an order was
not restricted to culture; it also created and legitimized a sense of
domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
This cultural legitimation, as Said points out, also persists after the
end of formal colonial domination and continues to marginalize the
postcolonial subjects. As before, they are unable to speak for
themselves and therefore remain in the dependent periphery, which is
defined by their subordinate position in relation to the center. Said
directed the focus of critique to this arrangement of center and
periphery, which he saw as being (re)produced and legitimized on the
cultural level. From this arose the demand that everyone should have the
right to speak, to place him- or herself in the center. To achieve this,
it was necessary first of all to develop a language -- indeed, a
cultural landscape -- that can manage without a hegemonic center and is
thus oriented toward multiplicity instead of
uniformity.[^43^](#c1-note-0043){#c1-note-0043a}
A somewhat different approach has been taken by the literary theorist
Homi K. Bhabha. He proceeds from the idea that the colonized never fully
and passively adopt the culture of the colonialists -- the "English book,"
as he calls it. Their previous culture is never simply wiped out and
replaced by another. What always and necessarily occurs is rather a
process of hybridization. This concept, according to Bhabha,
::: {.extract}
suggests that all of culture is constructed around negotiations and
conflicts. Every cultural practice involves an attempt -- sometimes
good, sometimes bad -- to establish authority. Even classical works of
art, such as a painting by Brueghel or a composition by Beethoven, are
concerned with the establishment of cultural authority. Now, this poses
the following question: How does one function as a negotiator when
one\'s own sense of agency is limited, for instance, on account of being
excluded or oppressed? I think that, even in the role of the underdog,
there are opportunities to upend the imposed cultural authorities -- to
accept some aspects while rejecting others. It is in this way that
symbols of authority are hybridized and made into something of one\'s
own. For me, hybridization is not simply a mixture but rather a
[]{#Page_31 type="pagebreak" title="31"}strategic and selective
appropriation of meanings; it is a way to create space for negotiators
whose freedom and equality are
endangered.[^44^](#c1-note-0044){#c1-note-0044a}
:::
Hybridization is thus a cultural strategy for evading marginality that
is imposed from the outside: subjects, who from the dominant perspective
are deemed incapable of doing so, appropriate certain aspects of culture for
themselves and transform them into something else. What is decisive is
that this hybrid, created by means of active and unauthorized
appropriation, opposes the dominant version and the resulting speech is
thus legitimized from another -- that is, from one\'s own -- position.
In this way, a cultural engagement is set under way and the superiority
of one meaning or another is called into question. Who has the right to
determine how and why a relationship with others should be entered,
which resources should be appropriated from them, and how these
resources should be used? At the heart of the matter lie the abilities
of speech and interpretation; these can be seized in order to create
space for a "cultural hybridity that entertains difference without an
assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}
At issue is thus a strategy for breaking down hegemonic cultural
conditions, which distribute agency in a highly uneven manner, and for
turning one\'s own cultural production -- which has been dismissed by
cultural authorities as flawed, misconceived, or outright ignorant --
into something negotiable and independently valuable. Bhabha is thus
interested in fissures, differences, diversity, multiplicity, and
processes of negotiation that generate something like shared meaning --
culture, as he defines it -- instead of conceiving of it as something
that precedes these processes and is threatened by them. Accordingly, he
proceeds not from the idea of unity, which needs to be preserved and is
threatened whenever "others" are empowered to speak, but rather
from the irreducible multiplicity that, through laborious processes, can
be brought into temporary and limited consensus. Bhabha\'s vision of
culture is one without immutable authorities, interpretations, and
truths. In theory, everything can be brought to the table. This is not a
situation in which anything goes, yet the central meaning of
negotiation, the contextuality of consensus, and the mutability of every
frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
which can be shared equally by everyone -- are always potentially
negotiable.
Post-colonialism draws attention to the "disruptive power of the
excluded-included third," which becomes especially virulent when it
"emerges in the middle of semantic
structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
this power reveals the increasing cultural independence of those
formerly colonized, and it also transforms the cultural self-perception
of the West, for, even in Western nations that were not significant
colonial powers, there are multifaceted tensions between dominant
cultures and those who are on the defensive against discrimination and
attributions by others. Instead of relying on the old recipe of
integration through assimilation (that is, the dissolution of the
"other"), the right to self-determined difference is being called for
more emphatically. In such a manner, collective identities, such as
national identities, are freed from their questionable appeals to
cultural homogeneity and essentiality, and reconceived in terms of the
experience of immanent difference. Instead of one binding and
non-negotiable frame of reference for everyone, which hierarchizes
individual positions and makes them appear unified, a new order without
such limitations needs to be established. Ultimately, the aim is to
provide nothing less than an "alternative reading of
modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
the construction of the past and the modalities of the future. For
European culture in particular, such a project is an immense challenge.
Of course, these demands do not derive their everyday relevance
primarily from theory but rather from the experiences of
(de)colonization, migration, and globalization. Multifaceted as it is,
however, the theory does provide forms and languages for articulating
these phenomena, legitimizing new positions in public debates, and
attacking persistent mechanisms of cultural marginalization. It helps to
empower broader societal groups to become actively involved in cultural
processes, namely people, such as migrants and their children, whose
identity and experience are essentially shaped by non-Western cultures.
Such people have been giving voice to their experiences more frequently
and with greater confidence in all areas of public life, be it in
politics, literature, music, or
art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
experience of immigration is represented as part of the German
experience, have reached a wide public audience. In 2002, the group
Kanak Attak organized a series of conferences with the telling motto *no
integración*, and these did much to introduce postcolonial positions to
the debates taking place in German-speaking
countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
politicians with "migration backgrounds" were considered to be competent
in only one area, namely integration policy. This has since changed,
though not entirely. In 2008, for instance, Cem Özdemir was elected
co-chair of the Green Party and thus shares responsibility for all of
its political positions. Developments of this sort have been enabled
(and strengthened) by a shift in society\'s self-perception. In 2014,
Cemile Giousouf, the integration commissioner for the conservative
CDU/CSU alliance in the German Parliament, was able to make the
following statement without inciting any controversy: "Over the past few
years, Germany has become a modern land of
immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
proclamation. Not ten years earlier, her party colleague Norbert Lammert
had expressed, in his function as parliamentary president, interest in
reviving the debate about the term "leading culture." The increasingly
well-educated migrants of the first, second, or third generation no
longer accept the choice of being either marginalized as an exotic
representative of the "other" or entirely assimilated. Rather, they are
insisting on being able to introduce their specific experience as a
constitutive contribution to the formation of the present -- in
association and in conflict with other contributions, but at the same
level and with the same legitimacy. It is no surprise that various forms
of discrimination and violence against "foreigners" not only continue
in everyday life but have also been increasing in reaction to this new
situation. Ultimately, established claims to power are being called into
question.
To summarize, at least three secular historical tendencies or movements,
some of which can be traced back to the late nineteenth century but each
of which gained considerable momentum during the last third of the
twentieth (the spread of the knowledge economy, the erosion of
heteronormativity, and the focus of post-colonialism on cultural
hybridity), have greatly expanded the sphere of those who actively
negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
large part, the patterns and cultural foundations of these processes
developed long before the internet. Through the use of the internet, and
through the experiences of dealing with it, they have encroached upon
far greater portions of all societies.
:::
:::
::: {.section}
The Culturalization of the World {#c1-sec-0006}
--------------------------------
The number of participants in cultural processes, however, is not the
only thing that has increased. Parallel to that development, the field
of the cultural has expanded as well -- that is, those areas of life
that are not simply characterized by unalterable necessities, but rather
contain or generate competing options and thus require conscious
decisions.
The term "culturalization of the economy" refers to the central position
of knowledge-based, meaning-based, and affect-oriented processes in the
creation of value. With the emergence of consumption as the driving
force behind the production of goods and the concomitant necessity of
having not only to satisfy existing demands but also to create new ones,
the cultural and affective dimensions of the economy began to gain
significance. I have already discussed the beginnings of product
staging, advertising, and public relations. In addition to all of the
continuities that remain with us from that time, it is also possible to
point out a number of major changes that consumer society has undergone
since the late 1960s. These changes can be delineated by examining the
greater role played by design, which has been called the "core
discipline of the creative
economy."[^51^](#c1-note-0051){#c1-note-0051a}
As a field of its own, design originated alongside industrialization,
when, in collaborative processes, the activities of planning and
designing were separated from those of carrying out
production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
modern era that designers consciously endeavored to seek new forms for
the logic inherent to mass production. With the aim of economic
efficiency, they intended their designs to optimize the clearly defined
functions of anonymous and endlessly reproducible objects. At the end of
the nineteenth century, the architect Louis Sullivan, whose buildings
still distinguish the skyline of Chicago, condensed this new attitude
into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
follows function." Mies van der Rohe, working as an architect in Chicago
in the middle of the twentieth century, supplemented this with a pithy
and famous formulation of his own: "less is more." The rationality of
design, in the sense of isolating and improving specific functions, and
the economical use of resources were of chief importance to modern
(industrial) designers. Even the ten design principles of Dieter Rams,
who led the design division of the consumer products company Braun from
1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
Apple\'s chief design officer -- aimed to make products "usable,"
"understandable," "honest," and "long-lasting." "Good design," according
to his guiding principle, "is as little design as
possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
the technical and functional promised to solve problems for everyone in
a long-term and binding manner, for the inherent material and design
qualities of an object were supposed to make it independent from
changing times and from the tastes of consumers.
::: {.section}
### Beyond the object {#c1-sec-0007}
At the end of the 1960s, a new generation of designers rebelled against
this industrial and instrumental rationality, which was now felt to be
authoritarian, soulless, and reductionist. In the works associated with
"anti-design" or "radical design," the objectives of the discipline were
redefined and a new formal language was developed. In the place of
technical and functional optimization, recombination -- ecological
recycling or the postmodern interplay of forms -- emerged as a design
method and aesthetic strategy. Moreover, the aspiration of design
shifted from the individual object to its entire social and material
environment. The processes of design and production, which had been
closed off from one another and restricted to specialists, were opened
up precisely to encourage the participation of non-designers, be it
through interdisciplinary cooperation with other types of professions or
through the empowerment of laymen. The objectives of design were
radically expanded: rather than ending with the completion of an
individual product, it was now supposed to engage with society. In the
sense of cybernetics, this was regarded as a "system," controlled by
feedback processes, []{#Page_36 type="pagebreak" title="36"}which
connected social, technical, and biological dimensions to one
another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
new approach, was meant to be a "socially significant
activity."[^55^](#c1-note-0055){#c1-note-0055a}
Embedded in the social movements of the 1960s and 1970s, this new
generation of designers was curious about the social and political
potential of their discipline, and about possibilities for promoting
flexibility and autonomy instead of rigid industrial efficiency. Design
was no longer expected to solve problems once and for all, for such an
idea did not correspond to the self-perception of an open and mutable
society. Rather, it was expected to offer better opportunities for
enabling people to react to continuously changing conditions. A radical
proposal was developed by the Italian designer Enzo Mari, who in 1974
published his handbook *Autoprogettazione* (Self-Design). It contained
19 simple designs with which people could make, on their own,
aesthetically and functionally sophisticated furniture out of pre-cut
pieces of wood. In this case, the designs themselves were less important
than the critique of conventional design as elitist and of consumer
society as alienated and wasteful. Mari\'s aim was to reconceive the
relations among designers, the manufacturing industry, and users.
Increasingly, design came to be understood as a holistic and open
process. Victor Papanek, the founder of ecological design, took things a
step further. For him, design was "basic to all human activity. The
planning and patterning of any act towards a desired, foreseeable end
constitutes the design process. Any attempt to separate design, to make
it a thing-by-itself, works counter to the inherent value of design as
the primary underlying matrix of
life."[^56^](#c1-note-0056){#c1-note-0056a}
Potentially all aspects of life could therefore fall under the purview
of design. This came about from the desire to oppose industrialism,
which was blind to its catastrophic social and ecological consequences,
with a new and comprehensive manner of seeing and acting that was
unrestricted by economics.
Toward the end of the 1970s, this expanded notion of design owed less
and less to emancipatory social movements, and its socio-political goals
began to fall by the wayside. Three fundamental patterns survived,
however, which go beyond design and remain characteristic of the
culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
the discovery of the public as emancipated users and active
participants; the use of appropriation, transformation, and
recombination as methods for creating ever-new aesthetic
differentiations; and, finally, the intention of shaping the lifeworld
of the user.[^57^](#c1-note-0057){#c1-note-0057a}
As these patterns became depoliticized and commercialized, the focus of
designing the "lifeworld" shifted more and more toward designing the
"experiential world." By the end of the 1990s, this had become so
normalized that even management consultants could assert that
"\[e\]xperiences represent an existing but previously unarticulated
*genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
possible to define the dimensions of the experiential world in various
ways. For instance, it could be clearly delimited and product-oriented,
like the flagship stores introduced by Nike in 1990, which, with their
elaborate displays, were meant to turn shopping into an experience. This
experience, as the company\'s executives hoped, radiated outward and
influenced how the brand was perceived as a whole. The experiential
world could also, however, be conceived in somewhat broader terms, for
instance by designing entire institutions around the idea of creating a
more attractive work environment and thereby increasing the commitment
of employees. This approach is widespread today in creative industries
and has become popularized through countless stories about ping-pong
tables, gourmet cafeterias, and massage rooms in certain offices. In
this case, the process of creativity is applied back to itself in order
to systematize and optimize a given workplace\'s basis of operation. The
development is comparable to the "invention of invention" that
characterized industrial research around the end of the nineteenth
century, though now the concept has been relocated to the field of
knowledge production.
Yet the "experiential world" can be expanded even further, for instance
when entire cities attempt to make themselves attractive to
international clientele and compete with others by building spectacular
museums or sporting arenas. Displays in cities, as well as a few other
central locations, are regularly constructed in order to produce a
particular experience. This also entails, however, that certain forms of
use that fail to fit the "urban
script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
is hardly a single area of life to []{#Page_38 type="pagebreak"
title="38"}which the strategies and methods of design do not have
access, and this access occurs at all levels. For some time, design has
not been a purely visible matter, restricted to material objects; it
rather forms and controls all of the senses. Cities, for example, have
come to be understood increasingly as "sound spaces" and have
accordingly been reconfigured with the goal of modulating their various
noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
just a matter of objects, processes, and experiences. By now, in the
context of reproductive medicine, it has even been applied to the
biological foundations of life ("designer babies"). I will revisit this
topic below.
:::
Of course, design is not the only field of culture that has imposed
itself over society as a whole. A similar development has occurred in
the field of advertising, which, since the 1970s, has been integrated
into many more physical and social spaces and by now has a broad range
of methods at its disposal. Advertising is no longer found simply on
billboards or in display windows. In the form of "guerilla marketing" or
"product placement," it has penetrated every space and occupied every
discourse -- by blending with political messages, for instance -- and
can now even be spread, as "viral marketing," by the addressees of the
advertisements themselves. Similar processes can be observed in the
fields of art, fashion, music, theater, and sports. This has taken place
perhaps most radically in the field of "gaming," which has drawn upon
technical progress in the most direct possible manner and, with the
spread of powerful computers and mobile applications, has left behind
the confines of the traditional playing field. In alternate reality
games, the realm of the virtual and fictitious has also been
transcended, as physical spaces have been overlaid with their various
scripts.[^62^](#c1-note-0062){#c1-note-0062a}
This list could be extended, but the basic trend is clear enough,
especially as the individual fields overlap and mutually influence one
another. They are blending into a single interdependent field for
generating social meaning in the form of economic activity. Moreover,
through digitalization and networking, many new opportunities have
arisen for large-scale involvement by the public in design processes.
Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
technologies and flexible production processes, today\'s users can
personalize and create products to suit their wishes. Here, the spectrum
extends from tiny batches of creative-industrial products all the way to
global processes of "mass customization," in which factory-based mass
production is combined with personalization. One of the first
applications of this was introduced in 1999 when, through its website, a
sporting-goods company allowed customers to design certain elements of a
shoe by altering it within a set of guidelines. This was taken a step
further by the idea of "user-centered innovation," which relies on the
specific knowledge of users to enhance a product, with the additional
hope of discovering unintended applications and transforming these into
new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
become possible for end users to take over the design process from the
beginning, which has become considerably easier with the advent of
specialized platforms for exchanging knowledge, alongside semi-automated
production tools such as computer-controlled milling machines and 3D printers.
Digitalization, which has allowed all content to be processed, and
networking, which has created an endless amount of content ("raw
material"), have turned appropriation and recombination into general
methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
This phenomenon will be examined more closely in the next chapter.
Both the involvement of users in the production process and the methods
of appropriation and recombination are extremely information-intensive
and communication-intensive. Without the corresponding technological
infrastructure, neither could be achieved efficiently or on a large
scale. This was evident in the 1970s, when such approaches never made it
beyond subcultures and conceptual studies. With today\'s search engines,
every single user can trawl through an amount of information that, just
a generation ago, would have been unmanageable even by professional
archivists. A broad array of communication platforms (together with
flexible production capacities and efficient logistics) not only weakens
the contradiction between mass fabrication and personalization; it also
allows users to network directly with one another in order to develop
specialized knowledge together and thus to enable themselves to
intervene directly in design processes, both as []{#Page_40
type="pagebreak" title="40"}willing participants in and as critics of
flexible global production processes.
:::
:::
::: {.section}
The Technologization of Culture {#c1-sec-0009}
-------------------------------
That society is dependent on complex information technologies in order
to organize its constitutive processes is, in itself, nothing new.
Rather, this began as early as the late nineteenth century. It is
directly correlated with the expansion and acceleration of the
circulation of goods, which came about through industrialization. As the
historian and sociologist James Beniger has noted, this led to a
"control crisis," for administrative control centers were faced with the
problem of losing sight of what was happening in their own factories,
with their suppliers, and in the important markets of the time.
Management was in a bind: decisions had to be made either on the basis
of insufficient information or too late. The existing administrative and
control mechanisms could no longer deal with the rapidly increasing
complexity and time-sensitive nature of extensively organized production
and distribution. The office became more important, and ever more people
were needed there to fulfill a growing number of functions. Yet this was
not enough for the crisis to subside. The old administrative methods,
which involved manual information processing, simply could no longer
keep up. The crisis reached its first dramatic peak in 1889 in the
United States, with the realization that the census data from the year
1880 had not yet been analyzed when the next census was already
scheduled to take place during the subsequent year. In the same year,
the Secretary of the Interior organized a conference to investigate
faster methods of data processing. Two methods were tested for making
manual labor more efficient, and one promised to achieve greater
efficiency by means of novel data-processing machines. The latter emerged
as the clear victor; developed by an engineer named Herman Hollerith, it
mechanically processed and stored data on
punch cards. The idea was based on Hollerith\'s observations of the
coupling and decoupling of railroad cars, which he interpreted as
modular units that could be combined in any desired order. The punch
card transferred this approach to information []{#Page_41
type="pagebreak" title="41"}management. Data were no longer stored in
fixed, linear arrangements (tables and lists) but rather in small units
(the punch cards) that, like railroad cars, could be combined in any
given way. The increase in efficiency -- with respect to speed *and*
flexibility -- was enormous, and nearly a hundred of Hollerith\'s
machines were used by the Census
Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
in the history of information processing, with technical means no longer
being used exclusively to store data, but to process data as well. This
was the only way to avoid the impending crisis, ensuring that
bureaucratic management could maintain centralized control. Hollerith\'s
machines proved to be a resounding success and were implemented in many
more branches of government and corporate administration, where
data-intensive processes had increased so rapidly they could not have
been managed without such machines. This growth was accompanied by that
of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
which, after a number of mergers, was renamed in 1924 as the
International Business Machines Corporation (IBM). Throughout the
following decades, dependence on information-processing machines only
deepened. The growing number of social, commercial, and military
processes could only be managed by means of information technology. This
largely took place, however, outside of public view, namely in the
specialized divisions of large government and private organizations.
These were the only institutions in command of the necessary resources
for operating the complex technical infrastructure -- so-called
mainframe computers -- that was essential to automatic information
processing.
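
The shift described here, from fixed, linear arrangements to small, self-contained record units that can be regrouped in any order, can be loosely illustrated in present-day terms. The following sketch is only an analogy, with invented fields and figures; it says nothing about how Hollerith\'s machines actually worked, only about why modular records make flexible tallying easy.

```python
from collections import Counter
from itertools import groupby

# Each "card" is a small, self-contained record: the modular unit.
cards = [
    {"state": "NY", "occupation": "clerk",  "age": 34},
    {"state": "NY", "occupation": "farmer", "age": 51},
    {"state": "IL", "occupation": "clerk",  "age": 29},
    {"state": "IL", "occupation": "farmer", "age": 45},
]

# Because each record stands on its own, the same stack can be tallied
# along any dimension without redesigning a fixed table layout.
by_state = Counter(card["state"] for card in cards)
by_occupation = Counter(card["occupation"] for card in cards)

# Regrouping, like re-sorting a physical stack of cards
# (itertools.groupby requires sorted input).
cards.sort(key=lambda card: card["occupation"])
grouped = {key: list(group)
           for key, group in groupby(cards, key=lambda card: card["occupation"])}

print(by_state)        # Counter({'NY': 2, 'IL': 2})
print(by_occupation)   # Counter({'clerk': 2, 'farmer': 2})
print(sorted(grouped)) # ['clerk', 'farmer']
```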
::: {.section}
### The independent media {#c1-sec-0010}
As with so much else, this situation began to change in the 1960s. Mass
media and information-processing technologies began to attract
criticism, even though all of the involved subcultures, media activists,
and hackers continued to act independently from one another until the
1990s. The freedom-oriented social movements of the 1960s began to view
the mass media as part of the political system against which they were
struggling. The connections among the economy, politics, and the media
were becoming more apparent, not []{#Page_42 type="pagebreak"
title="42"}least because many mass media companies, especially those in
Germany related to the Springer publishing house, were openly inimical
to these social movements. Critical theories arose that, borrowing
Louis Althusser\'s influential term, regarded the media as part of the
"ideological state apparatus"; that is, as one of the authorities whose
task is to influence people to accept social relations to such a degree
that the "repressive state apparatuses" (the police, the military, etc.)
form a constant background in everyday
life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
condition in which the governed are manipulated to form a cultural
consensus with the ruling class; they accept the latter\'s
presuppositions (and the politics which are thus justified) even though,
by doing so, they are forced to suffer economic
disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
Situationists attributed to the media a central role in the new form of
rule known as "the spectacle," the glittery surfaces and superficial
manifestations of which served to conceal society\'s true
relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
aligned themselves with the critique of the "culture industry," which
had been formulated by Max Horkheimer and Theodor W. Adorno at the
beginning of the 1940s and had become a widely discussed key text by the
1960s.
Their differences aside, these perspectives were united in that they no
longer understood the "public" as a neutral sphere, in which citizens
could inform themselves freely and form their opinions, but rather as
something that was created with specific intentions and consequences.
From this grew an interest in "counter-publics"; that is, in forums
where other actors could appear and negotiate theories of their own. The
mass media thus became an important instrument for organizing the
bourgeois--capitalist public, but they were also responsible for the
development of alternatives. Media, according to one of the core ideas
of these new approaches, are not so much a sphere in which an external reality
is depicted; rather, they are themselves a constitutive element of
reality.
:::
::: {.section}
### Media as lifeworlds {#c1-sec-0011}
Another branch of new media theories, that of Marshall McLuhan and the
Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
[]{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
different grounds. In 1964, McLuhan aroused a great deal of attention
with his slogan "the medium is the message." He maintained that every
medium of communication, by means of its media-specific characteristics,
directly affected the consciousness, self-perception, and worldview of
every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
believed, happens independently of and in addition to whatever specific
message a medium might be conveying. From this perspective, reality does
not exist outside of media, given that media codetermine our personal
relation to and behavior in the world. For McLuhan and the Toronto
School, media were thus not channels for transporting content but rather
the all-encompassing environments -- galaxies -- in which we live.
Such ideas were circulating much earlier and were intensively developed
by artists, many of whom were beginning to experiment with new
electronic media. An important starting point in this regard was the
1963 exhibit *Exposition of Music -- Electronic Television* by the
Korean artist Nam June Paik, who was then collaborating with Karlheinz
Stockhausen in Düsseldorf. Among other things, Paik presented 12
television sets, the screens of which were "distorted" by magnets. Here,
however, "distorted" is a problematic term, for, as Paik explicitly
noted, the electronic images were "a beautiful slap in the face of
classic dualism in philosophy since the time of Plato. \[...\] Essence
AND existence, essentia AND existentia. In the case of the electron,
however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
Paik no longer understood the electronic image on the television screen
as a portrayal or representation of anything. Rather, it engendered in
the moment of its appearance an autonomous reality beyond and
independent of its representational function. A whole generation of
artists began to explore forms of existence in electronic media, which
they no longer understood as pure media of information. In his work
*Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
end of a corridor that was approximately 10 meters long but only 50
centimeters wide. On the lower monitor ran a video showing the empty
hallway. The upper monitor displayed an image captured by a camera
installed at the entrance of the hall, about 3 meters high. If the
viewer moved down the corridor toward the two []{#Page_44
type="pagebreak" title="44"}monitors, he or she would thus be recorded
by the latter camera. Yet the closer one came to the monitor, the
farther one would be from the camera, so that one\'s image on the
monitor would become smaller and smaller. Recorded from behind, viewers
would thus watch themselves walking away from themselves. Surveillance
by others, self-surveillance, recording, and disappearance were directly
and intuitively connected with one another and thematized as fundamental
issues of electronic media.
Toward the end of the 1960s, the easier availability and mobility of
analog electronic production technologies promoted the search for
counter-publics and the exploration of media as comprehensive
lifeworlds. In 1967, Sony introduced its first Portapak system: a
battery-powered, self-contained recording system -- consisting of a
camera, a cord, and a recorder -- with which it was possible to make
(black-and-white) video recordings outside of a studio. Although the
recording apparatus, which required additional devices for editing and
projection, was offered at the relatively expensive price of \$1,500
(which corresponds to about €8,000 today), it was still affordable for
interested groups. Compared with the situation of traditional film
cameras, these new cameras considerably lowered the initial hurdle for
media production, for video tapes were not only much cheaper than film
reels (and could be used for multiple recordings); they also made it
possible to view recorded material immediately and on location. This
enabled the production of works that were far more intuitive and
spontaneous than earlier ones. The 1970s saw the formation of many video
groups, media workshops, and other initiatives for the independent
production of electronic media. Through their own distribution,
festivals, and other channels, such groups created alternative public
spheres. The latter became especially prominent in the United States
where, at the end of the 1960s, the providers of cable networks were
legally obligated to establish public-access channels, on which citizens
were able to operate self-organized and non-commercial television
programs. This gave rise to a considerable public-access movement there,
which at one point extended across 4,000 cities and was responsible for
producing programs from and for these different
communities.[^72[]{#Page_45 type="pagebreak"
title="45"}^](#c1-note-0072){#c1-note-0072a}
What these initiatives in Western Europe and the United States had in
common was their attempt to close the gap between the
consumption and production of media, to activate the public, and at
least in part to experiment with the media themselves. Non-professional
producers were empowered with the ability to control who told their
stories and how this happened. Groups that previously had no access to
the medial public sphere now had opportunities to represent themselves
and their own interests. By working together on their own productions,
such groups demystified the medium of television and simultaneously
equipped it with a critical consciousness.
Especially well received in Germany was the work of Hans Magnus
Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
radio theory) in favor of distinguishing between "repressive" and
"emancipatory" uses of media. For him, the emancipatory potential of
media lay in the fact that "every receiver is \[...\] a potential
transmitter" that can participate "interactively" in "collective
production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
first German video group, Telewissen, debuted in public with a
demonstration in downtown Darmstadt. In 1980, at the peak of the
movement for independent video production, there were approximately a
hundred such groups throughout (West) Germany. The lack of distribution
channels, however, represented a nearly insuperable obstacle and ensured
that many independent productions were seldom viewed outside of
small-scale settings. Tapes had to be exchanged between groups through
the mail, and they were mainly shown at gatherings and events, and in
bars. The dynamic of alternative media shifted toward a small subculture
(though one networked throughout all of Europe) of pirate radio and
television broadcasters. Germany\'s first pirate or citizens\' radio
station, Radio Dreyeckland in Freiburg, which had been founded in 1977 as
Radio Verte Fessenheim, began regular operations at the beginning of the
1980s and broadcast information about the political protest movements
that had arisen against the use of nuclear power in Fessenheim (France),
Wyhl (Germany), and Kaiseraugst
(Switzerland). The epicenter of the scene, however, was located in
Amsterdam, where the group known as Rabotnik TV, which was an offshoot
[]{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
would illegally feed its signal through official television stations
after their programming had ended at night (many stations then stopped
broadcasting at midnight). In 1988, the group acquired legal
broadcasting slots on the cable network and reached up to 50,000 viewers
with their weekly experimental shows, which largely consisted of footage
appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
Early in 1990, the pirate television station Kanal X was created in
Leipzig; it produced its own citizens\' television programming in the
quasi-lawless milieu of the GDR before
reunification.[^75^](#c1-note-0075){#c1-note-0075a}
These illegal, independent, or public-access stations only managed to
establish themselves as real mass media to a very limited extent.
Nevertheless, they played an important role in sensitizing an entire
generation of media activists, whose opportunities expanded as the means
of production became both better and cheaper. In the name of "tactical
media," a new generation of artistic and political media activists came
together in the middle of the
1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
revolution," which in the late 1980s had made video equipment available
to broader swaths of society, stirring visions of democratic media
production, with the newly arrived medium of the internet. Despite still
struggling with numerous technical difficulties, they remained constant
in their belief that the internet would solve the hitherto intractable
problem of distributing content. The transition from analog to digital
media lowered the production hurdle yet again, not least through the
ongoing development of improved software. Now, many stages of production
that had previously required professional or semi-professional expertise
and equipment could also be carried out by engaged laymen. As a
consequence, the focus of interest broadened to include not only the
development of alternative production groups but also the possibility of
a flexible means of rapid intervention in existing structures. Media --
both television and the internet -- were understood as environments in
which one could act without directly representing a reality outside of
the media. Television was analyzed in terms of its own inherent laws,
which could then be manipulated to affect things beyond the media.
Increasingly, culture jamming and the campaigns of so-called
communication guerrillas were blurring the difference between media and
political activity.[^77[]{#Page_47 type="pagebreak"
title="47"}^](#c1-note-0077){#c1-note-0077a}
This difference was dissolved entirely by a new generation of
politically motivated artists, activists, and hackers, who transferred
the tactics of civil disobedience -- blockading a building with a
sit-in, for instance -- to the
internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
Zapatista Army of National Liberation rose up in the south of Mexico,
several media projects were created to support its mostly peaceful
opposition and to make the movement known in Europe and North America.
As part of this loose network, in 1998 the American artist collective
Electronic Disturbance Theater developed a relatively simple computer
program called FloodNet that enabled networked sympathizers to shut down
websites, such as those of the Mexican government, in a targeted and
temporary manner. The principle was easy enough: the program would
automatically reload a certain website over and over again in order to
exhaust the capacities of its network
servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
destroy data but rather to disturb the normal functioning of an
institution in order to draw attention to the activities and interests
of the protesters.
:::
::: {.section}
### Networks as places of action {#c1-sec-0012}
What this new generation of media activists shared in common with the
hackers and pioneers of computer networks was the idea that
communication media are spaces for agency. During the 1960s, these
programmers were also in search of alternatives. The difference during
the 1960s is that they did not pursue these alternatives in
counter-publics, but rather in alternative lifestyles and communication.
The rejection of bureaucracy as a form of social organization played a
significant role in the critique of industrial society formulated by
freedom-oriented social movements. At the beginning of the previous
century, Max Weber had still regarded bureaucracy as a clear sign of
progress toward a rational and methodical
organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
assessment on processes that were impersonal, rule-bound, and
transparent (in the sense that they were documented with files). But
now, in the 1960s, bureaucracy was being criticized as soulless,
alienated, oppressive, non-transparent, and unfit for an increasingly
complex society. Whereas the first four of these points are in basic
agreement with Weber\'s thesis about "disenchanting" []{#Page_48
type="pagebreak" title="48"}the world, the last point represents a
radical departure from his analysis. Bureaucracies were no longer
regarded as hyper-efficient but rather as inefficient, and their size
and rule-bound nature were no longer seen as strengths but rather as
decisive weaknesses. The social bargain of offering prosperity and
security in exchange for subordination to hierarchical relations struck
many as being anything but attractive, and what blossomed instead was a
broad interest in alternative forms of coexistence. New institutions
were expected to be more flexible and more open. The desire to step away
from the system was widespread, and many (mostly young) people set about
doing exactly that. Alternative ways of life -- communes, shared
apartments, and cooperatives -- were explored in the country and in
cities. They were meant to provide the individual with greater autonomy
and the opportunity to develop his or her own unique potential. Despite
all of the differences between these concepts of life, they nevertheless
shared something of a common denominator: the promise of
reconceptualizing social institutions and the fundamentals of
coexistence, with the aim of reformulating them in such a way as to
allow everyone\'s personal potential to develop fully in the here and
now.
According to critics of such alternatives, bureaucracy was necessary in
order to organize social life as it radically reduced the world\'s
complexity by forcing it through the bottleneck of official procedures.
However, the price paid for such efficiency involved the atrophying of
human relationships, which had to be subordinated to rigid processes
that were incapable of registering unique characteristics and
differences and were unable to react in a timely manner to changing
circumstances.
In the 1960s, many countercultural attempts to find new forms of
organization placed personal and open communication at the center of
their efforts. Each individual was understood as a singular person with
untapped potential rather than as a carrier of abstract and clearly defined
functions. It was soon realized, however, that every common activity and
every common decision entailed processes that were time-intensive and
communication-intensive. As soon as a group exceeded a certain size, it
became practically impossible for it to reach any consensus. As a result
of these experiences, an entire worldview emerged that propagated
"smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
("small is beautiful"). It was thought that in this way society might
escape from bureaucracy with its ostensibly disastrous consequences for
humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
this belief did not last for long. For, unlike the majority of European
alternative movements, the counterculture in the United States was not
overwhelmingly critical of technology. On the contrary, many actors
there sought suitable technologies for solving the practical problems of
social organization. At the end of the 1960s, a considerable amount of
attention was devoted to the field of basic technological research. This
field brought together the interests of the military, academics,
businesses, and activists from the counterculture. The common ground for
all of them was a cybernetic vision of institutions, or, in the words of
the historian Fred Turner:
::: {.extract}
a picture of humans and machines as dynamic, collaborating elements in a
single, highly fluid, socio-technical system. Within that system,
control emerged not from the mind of a commanding officer, but from the
complex, probabilistic interactions of humans, machines and events
around them. Moreover, the mechanical elements of the system in question
-- in this case, the predictor -- enabled the human elements to achieve
what all Americans would agree was a worthwhile goal. \[...\] Over the
coming decades, this second vision of benevolent man-machine systems, of
circular flows of information, would emerge as a driving force in the
establishment of the military--industrial--academic complex and as a
model of an alternative to that
complex.[^82^](#c1-note-0082){#c1-note-0082a}
:::
This dual role was possible because, as a theory, cybernetics was
formulated in extraordinarily abstract terms, so much so that a whole
variety of competing visions could be associated with
it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
meta-science, it was possible to investigate the common features of
technical, social, and biological
processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
open, interactive, and information-processing systems. It was especially
consequential that cybernetics defined control and communication as the
same thing, namely as activities oriented toward informational
feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
of cybernetics and its synonymous treatment of the terms "communication"
and "control" continue to influence information technology and the
internet today.[]{#Page_50 type="pagebreak" title="50"}
The various actors who contributed to the development of the internet
shared a common interest in forms of organization based on the
comprehensive, dynamic, and open exchange of information. Both on the
micro and macro level (and this is decisive at this point),
decentralized and flexible communication technologies were meant to
become the foundation of new organizational models. Militaries feared
attacks on their command and communication centers; academics wanted to
broaden their culture of autonomy, collaboration among peers, and the
free exchange of information; businesses were looking for new areas of
activity; and countercultural activists were longing for new forms of
peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
rejected the bureaucratic model, and the counterculture provided them
with the central catchword for their alternative vision: community.
Though rather difficult to define, it was a powerful and positive term
that somehow promised the opposite of bureaucracy: humanity,
cooperation, horizontality, mutual trust, and consensus. Now, however,
humanity was expected to be reconfigured as a community in cooperation
with and inseparable from machines. What was yearned for was a
liberating symbiosis of man and machine, an idea that the author
Richard Brautigan was quick to mock in his poem "All Watched Over by
Machines of Loving Grace" from 1967.[^87^](#c1-note-0087){#c1-note-0087a}
In the poem, Brautigan ridicules both the impatience (*the sooner the
better!*) and the naïve optimism (*harmony, clear sky*) of the
countercultural activists. Primarily, he regarded the underlying vision
as an innocent but amusing fantasy and not as a potential threat against
which something had to be done. And there were also reasons to believe
that, ultimately, the new communities would be free from the coercive
nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
characterized the downside of community experiences. It was thought that
the autonomy and freedom of the individual could be regained in and by
means of the community. The conditions for this were that participation
in the community had to be voluntary and that the rules of participation
had to be self-imposed. I will return to this topic in greater detail
below.
In line with their solution-oriented engineering culture and with the
expectations of the results-focused military funders who by and large
set the agenda, a
relatively small group of computer scientists now took it upon
themselves to establish the technological foundations for new
institutions. This was not an abstract goal for the distant future;
rather, they wanted to change everyday practices as soon as possible. It
was around this time that advanced technology became the basis of social
communication, which now adopted forms that would have been
inconceivable (not to mention impracticable) without these
preconditions. Of course, effective communication technologies already
existed at the time. Large corporations had begun long before then to
operate their own computing centers. In contrast to the latter, however,
the new infrastructure could also be used by individuals outside of
established institutions and could be implemented for all forms of
communication and exchange. This idea gave rise to a pragmatic culture
of horizontal, voluntary cooperation. The clearest summary of this early
ethos -- which originated at the unusual intersection of military,
academic, and countercultural interests -- was offered by David D.
Clark, a computer scientist who for some time coordinated the
development of technical standards for the internet: "We reject: kings,
presidents and voting. We believe in: rough consensus and running
code."[^88^](#c1-note-0088){#c1-note-0088a}
All forms of classical, formal hierarchies and their methods for
resolving conflicts -- commands (by kings and presidents) and votes --
were dismissed. Implemented in their place was a pragmatics of open
cooperation that was oriented around two guiding principles. The first
was that different views should be discussed without a single individual
being able to block any final decisions. Such was the meaning of the
expression "rough consensus." The second was that, in accordance with
the classical engineering tradition, the focus should remain on concrete
solutions that had to be measured against one []{#Page_52
type="pagebreak" title="52"}another on the basis of transparent
criteria. Such was the meaning of the expression "running code." In
large part, this method was possible because the group oriented around
these principles was, internally, relatively homogeneous: it consisted
of top-notch computer scientists -- all of them men -- at respected
American universities and research centers. For this very reason, many
potential and fundamental conflicts were avoided, at least at first.
This internal homogeneity lends rather dark undertones to their sunny
vision, but this was hardly recognized at the time. Today these
undertones are far more apparent, and I will return to them below.
Not only were technical protocols developed on the basis of these
principles, but organizational forms as well. Along with the Internet
Engineering Task Force (which he directed), Clark created the so-called
Request-for-Comments documents, with which ideas could be presented to
interested members of the community and simultaneous feedback could be
collected in order to work through the ideas in question and thus reach
a rough consensus. If such a consensus could not be reached -- if, for
instance, an idea failed to resonate with anyone or was too
controversial -- then the matter would be dropped. The feedback was
organized as a form of many-to-many communication through email lists,
newsgroups, and online chat systems. This proved to be so effective that
horizontal communication within large groups or between multiple groups
could take place without resulting in chaos. This undermined the
traditional tendency of social units, once they reached a certain size,
to introduce hierarchical structures for the sake of reducing complexity
and the amount of communication required. In other words, the foundations
were laid for larger numbers of (changing) people to organize flexibly
and with the aim of building an open consensus. For Manuel Castells,
this combination of organizational flexibility and scalability in size
is the decisive innovation that was enabled by the rise of the network
society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
this meant that forms of organization spread that are only possible
on the basis of technologies that have formed (and continue to form)
part of the infrastructure of the internet. Digital technology and the
social activity of individual users were linked together to an
unprecedented extent. Social and cultural agendas were now directly
related []{#Page_53 type="pagebreak" title="53"}to and entangled with
technical design. Each of the four original interest groups -- the
military, scientists, businesses, and the counterculture -- implemented
new technologies to pursue their own projects, which partly complemented
and partly contradicted one another. As we know today, the first three
groups still cooperate closely with each other. To a great extent, this
has allowed the military and corporations, which are willingly supported
by researchers in need of funding, to determine the technology and thus
aspects of the social and cultural agendas that depend on it.
The software developers\' immediate environment experienced its first
major change in the late 1970s. Software, which for many had been a mere
supplement to more expensive and highly specialized hardware, became a
marketable good with stringent licensing restrictions. A new generation
of businesses, led by Bill Gates, suddenly began to label cooperation
among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
Previously it had been par for the course, and above all necessary, for
programmers to share software with one another. The former culture of
horizontal cooperation between developers transformed into a
hierarchical and commercially oriented relation between developers and
users (many of whom, at least at the beginning, had developed programs
of their own). For the first time, copyright came to play an important
role in digital culture. In order to survive in this environment, the
practice of open cooperation had to be placed on a new legal foundation.
Copyright law, which served to separate programmers (producers) from
users (consumers), had to be neutralized or circumvented. The first step
in this direction was taken in 1984 by the activist and programmer
Richard Stallman. Composed by Stallman, the GNU General Public License
was and remains a brilliant hack that uses the letter of copyright law
against its own spirit. This happens in the form of a license that
defines "four freedoms":
1. The freedom to run the program as you wish, for any purpose (freedom
0).
2. The freedom to study how the program works and change it so it does
your computing as you wish (freedom 1).
3. The freedom to redistribute copies so you can help your neighbor
(freedom 2).[]{#Page_54 type="pagebreak" title="54"}
4. The freedom to distribute copies of your modified versions to others
(freedom 3). By doing this you can give the whole community a chance
to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}
Thanks to this license, people who were personally unacquainted and did
not share a common social environment could now cooperate (freedoms 2
and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
and 1). For many, the tension between the need to develop complex
software in large teams and the desire to maintain one\'s own autonomy
represented an incentive to try out new forms of
cooperation.[^92^](#c1-note-0092){#c1-note-0092a}
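In practice, these freedoms are granted by attaching the license\'s
standard notice to each source file. The following is a minimal sketch of
such a header for a hypothetical Python file; the file name, author, and
year are placeholders, and the wording follows the notice suggested for
version 3 of the license, which postdates the period described here:

```python
# frobnicate.py -- a hypothetical example file, shown only to illustrate
# how the GPL notice is typically applied; name and author are invented.
# Copyright (C) 2024  Jane Doe
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.

def frobnicate(text: str) -> str:
    """Trivial placeholder functionality; the notice above is the point."""
    return text[::-1]
```

Because the notice travels with every copy of the file, whoever receives
the program receives the four freedoms along with it.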
Stallman\'s influence was at first limited to a small circle of
programmers. In the middle of the 1980s, the goal of developing a
completely free operating system seemed a distant one. Communication
between those interested in doing so was often slow and complicated. In
part, program code still had to be sent by physical mail. It was not until the
beginning of the 1990s that students in technical departments at many
universities could access the
internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
these new opportunities in an innovative way was a Finnish student named
Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
which, as the most important module of an operating system, governs the
interaction between hardware and software. He published the first free
version of this in 1991 and encouraged anyone interested to give him
feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
Torvalds reacted promptly and issued new versions of his software in
quick succession. Instead of understanding his software as a finished
product, he treated it like an open-ended process. This, in turn,
motivated even more developers to participate, because they saw that
their contributions were being adopted swiftly, which led to the
formation of an open community of interested programmers who swapped
ideas over the internet and continued writing software. In order to
maintain an overview of the different versions of the program, which
appeared in parallel with one another, it soon became necessary to
employ specialized platforms. The fusion of social processes --
horizontal and voluntary cooperation among developers -- and
technological platforms, which enabled this form of cooperation
[]{#Page_55 type="pagebreak" title="55"}by providing archives, filter
functions, and search capabilities that made it possible to organize
large amounts of data, was thus advanced even further. The programmers
were no longer primarily working on the development of the internet
itself, which by then was functioning quite reliably, but were rather
using the internet to apply their cooperative principles to other
arenas. By the end of the 1990s, the free-software movement had
established a new, internet-based form of organization and had
demonstrated its efficiency in practice: horizontal, informal
communities of actors -- voluntary, autonomous, and focused on a common
interest -- that, on the basis of high-tech infrastructure, could
include thousands of people without having to create formal hierarchies.
:::
:::
::: {.section}
From the Margins to the Center of Society {#c1-sec-0013}
-----------------------------------------
It was around this same time that the technologies in question, which
were already no longer very new, entered mainstream society. Within a
few years, the internet became part of everyday life. Three years before
the turn of the millennium, only about 6 percent of the entire German
population used the internet, often only occasionally. Three years after
the millennium, the number of users already exceeded 53 percent. Since
then, this share has increased even further. In 2014, it was more than
97 percent for people under the age of
40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
data transfer rates increased considerably, broadband connections
replaced dial-up modems, and the internet was suddenly "here" and no
longer "there." With the spread of mobile devices, especially since the
year 2007 when the first iPhone was introduced, digital communication
became available both extensively and continuously. Since then, the
internet has been ubiquitous. The amount of time that users spend online
has increased and, with the rapid ascent of social mass media such as
Facebook, people have been online in almost every situation and
circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
like water or electricity, has become for many people a utility that is
simply taken for granted.
In a BBC survey from 2010, 80 percent of those polled believed that
internet access -- a precondition for participating []{#Page_56
type="pagebreak" title="56"}in the now dominant digital condition --
should be regarded as a fundamental human right. This idea was most
popular in South Korea (96 percent) and Mexico (94 percent), while in
Germany at least 72 percent were of the same
opinion.[^97^](#c1-note-0097){#c1-note-0097a}
On the basis of this new infrastructure, which is now relevant in all
areas of life, the cultural developments described above have been
severed from the specific historical conditions from which they emerged
and have permeated society as a whole. Expressivity -- the ability to
communicate something "unique" -- is no longer a trait of artists and
knowledge workers alone, but rather something that is required by an
increasingly broader stratum of society and is already being taught in
schools. Users of social mass media must produce (themselves). The
development of specific, differentiated identities and the demand that
each be treated equally are no longer promoted exclusively by groups who
have to struggle against repression, existential threats, and
marginalization, but have penetrated deeply into the former mainstream,
not least because the present forms of capitalism have learned to profit
from the spread of niches and segmentation. When even conservative
parties have abandoned the idea of a "leading culture," then cultural
differences can no longer be classified by enforcing an absolute and
indisputable hierarchy, the top of which is occupied by specific
(geographical and cultural) centers. Rather, a space has been opened up
for endless negotiations, a space in which -- at least in principle --
everything can be called into question. This is not, of course, a
peaceful and egalitarian process. In addition to the practical hurdles
that exist in polarizing societies, there are also violent backlashes
and new forms of fundamentalism that are attempting once again to remove
certain religious, social, cultural, or political dimensions of
existence from the discussion. Yet these can only be understood in light
of a sweeping cultural transformation that has already reached
mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
the digital condition has become quotidian and dominant. It forms a
cultural constellation that determines all areas of life, and its
characteristic features are clearly recognizable. These will be the
focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
:::
::: {.section .notesList}
[1](#c1-note-0001a){#c1-note-0001} Kathrin Passig and Sascha Lobo,
*Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].
[2](#c1-note-0002a){#c1-note-0002} The expression "heteronormatively
behaving" is used here to mean that, while in the public eye, the
behavior of the people []{#Page_177 type="pagebreak" title="177"}in
question conformed to heterosexual norms regardless of their personal
sexual orientations.
[3](#c1-note-0003a){#c1-note-0003} No order is ever entirely closed
off. In this case, too, there was also room for exceptions and for
collective moments of greater cultural multiplicity. That said, the
social openness of the end of the 1920s, for instance, was restricted to
particular milieus within large cities and was accordingly short-lived.
[4](#c1-note-0004a){#c1-note-0004} Fritz Machlup, *The Political
Economy of Monopoly: Business, Labor and Government Policies*
(Baltimore, MD: The Johns Hopkins University Press, 1952).
[5](#c1-note-0005a){#c1-note-0005} Machlup was a student of Ludwig von
Mises, the most influential representative of this radically
individualist school. See Hans-Hermann Hoppe, "Die Österreichische
Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.
[6](#c1-note-0006a){#c1-note-0006} Fritz Machlup, *The Production and
Distribution of Knowledge in the United States* (New York: John Wiley &
Sons, 1962).
[7](#c1-note-0007a){#c1-note-0007} The term "knowledge worker" had
already been introduced to the discussion a few years before; see Peter
Drucker, *Landmarks of Tomorrow: A Report on the New "Post-Modern" World* (New York: Harper,
1959).
[8](#c1-note-0008a){#c1-note-0008} Peter Ecker, "Die
Verwissenschaftlichung der Industrie: Zur Geschichte der
Industrieforschung in den europäischen und amerikanischen
Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
35 (1990): 73--94.
[9](#c1-note-0009a){#c1-note-0009} Edward Bernays was the son of
Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
wife, Martha Bernays.
[10](#c1-note-0010a){#c1-note-0010} Edward L. Bernays, *Propaganda*
(New York: Horace Liveright, 1928).
[11](#c1-note-0011a){#c1-note-0011} James Beniger, *The Control
Revolution: Technological and Economic Origins of the Information
Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.
[12](#c1-note-0012a){#c1-note-0012} Norbert Wiener, *Cybernetics: Or
Control and Communication in the Animal and the Machine* (New York: J.
Wiley, 1948).
[13](#c1-note-0013a){#c1-note-0013} Daniel Bell, *The Coming of
Post-Industrial Society: A Venture in Social Forecasting* (New York:
Basic Books, 1973).
[14](#c1-note-0014a){#c1-note-0014} Simon Nora and Alain Minc, *The
Computerization of Society: A Report to the President of France*
(Cambridge, MA: MIT Press, 1980).
[15](#c1-note-0015a){#c1-note-0015} Manuel Castells, *The Rise of the
Network Society* (Oxford: Blackwell, 1996).
[16](#c1-note-0016a){#c1-note-0016} Hans-Dieter Kübler, *Mythos
Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}
[17](#c1-note-0017a){#c1-note-0017} Luc Boltanski and Ève Chiapello,
*The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
2005).
[18](#c1-note-0018a){#c1-note-0018} Michael Piore and Charles Sabel,
*The Second Industrial Divide: Possibilities of Prosperity* (New York:
Basic Books, 1984).
[19](#c1-note-0019a){#c1-note-0019} Castells, *The Rise of the Network
Society*. For a critical evaluation of Castells\'s work, see Felix
Stalder, *Manuel Castells and the Theory of the Network Society*
(Cambridge: Polity, 2006).
[20](#c1-note-0020a){#c1-note-0020} "UK Creative Industries Mapping
Documents" (1998); quoted from Terry Flew, *The Creative Industries:
Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.
[21](#c1-note-0021a){#c1-note-0021} The rise of the creative
industries, and the hope that they inspired among politicians, did not
escape criticism. Among the first works to draw attention to the
precarious nature of working in such industries was Angela McRobbie\'s
*British Fashion Design: Rag Trade or Image Industry?* (New York:
Routledge, 1998).
[22](#c1-note-0022a){#c1-note-0022} This definition is not without a
degree of tautology, given that economic growth is based on talent,
which itself is defined by its ability to create new jobs; that is,
economic growth. At the same time, he employs the term "talent" in an
extremely narrow sense. Apparently, if something has nothing to do with
job creation, it also has nothing to do with talent or creativity. All
forms of creativity are thus measured and compared according to a common
criterion.
[23](#c1-note-0023a){#c1-note-0023} Richard Florida, *Cities and the
Creative Class* (New York: Routledge, 2005), p. 5.
[24](#c1-note-0024a){#c1-note-0024} One study has reached the
conclusion that, despite mass participation, "a new form of
communicative elite has developed, namely digitally and technically
versed actors who inform themselves in this way, exchange ideas and thus
gain influence. For them, the possibilities of platforms mainly
represent an expansion of useful tools. Above all, the dissemination of
digital technology makes it easier for versed and highly networked
individuals to convey their news more simply -- and, for these groups of
people, it lowers the threshold for active participation." Michael
Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
(Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
\[--trans.\].
[25](#c1-note-0025a){#c1-note-0025} Boltanski and Chiapello, *The New
Spirit of Capitalism*.
[26](#c1-note-0026a){#c1-note-0026} According to Wikipedia,
"Heteronormativity is the belief that people fall into distinct and
complementary genders (man and woman) with natural roles in life. It
assumes that heterosexuality is the only sexual orientation or only
norm, and states that sexual and marital relations are most (or only)
fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
title="179"}
[27](#c1-note-0027a){#c1-note-0027} Jannis Plastargias, *RotZSchwul:
Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).
[28](#c1-note-0028a){#c1-note-0028} Helmut Ahrens et al. (eds),
*Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
(Berlin: Rosa Winkel, 1975), p. 4.
[29](#c1-note-0029a){#c1-note-0029} Susanne Regener and Katrin Köppert
(eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
(Vienna: Turia + Kant, 2013).
[30](#c1-note-0030a){#c1-note-0030} Such, for instance, was the
assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
Association in Germany, in his text "Schwulenpolitik früher" (link no
longer active). From today\'s perspective, however, the main problem
with this event was the unclear position of the Green Party with respect
to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
& Ruprecht, 2014).
[32](#c1-note-0032a){#c1-note-0032} Quoted from Frank Niggemeier, "Gay
Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
(ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
1995), pp. 179--87, at 184 \[--trans.\].
[33](#c1-note-0033a){#c1-note-0033} Quoted from Regener and Köppert,
*Privat/öffentlich*, p. 7 \[--trans.\].
[34](#c1-note-0034a){#c1-note-0034} Hans-Peter Buba and László A.
Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
Bundesanzeiger, 2001).
[35](#c1-note-0035a){#c1-note-0035} This process of internal
differentiation has not yet reached its conclusion, and thus the
acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
"lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
questioning, intersex, intergender, asexual, ally."
[36](#c1-note-0036a){#c1-note-0036} Judith Butler, *Gender Trouble:
Feminism and the Subversion of Identity* (New York: Routledge, 1989).
[37](#c1-note-0037a){#c1-note-0037} Andreas Krass, "Queer Studies: Eine
Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.
[38](#c1-note-0038a){#c1-note-0038} Edward W. Said, *Orientalism* (New
York: Vintage Books, 1978).
[39](#c1-note-0039a){#c1-note-0039} Karl August Wittfogel, *Oriental
Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
University Press, 1957).
[40](#c1-note-0040a){#c1-note-0040} Silke Förschler, *Bilder des Harem:
Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).
[41](#c1-note-0041a){#c1-note-0041} The selection and effectiveness of
these images is not a coincidence. Camel was one of the first brands of
cigarettes for []{#Page_180 type="pagebreak" title="180"}which
advertising, in the sense described above, was used in a systematic
manner.
[42](#c1-note-0042a){#c1-note-0042} This would not exclude feelings of
regret about the loss of an exotic and romantic way of life, such as
those of T. E. Lawrence, whose activities in the Near East during the
First World War were memorialized in the film *Lawrence of Arabia*
(1962).
[43](#c1-note-0043a){#c1-note-0043} Said has often been criticized,
however, for portraying orientalism so dominantly that there seems to be
no way out of the existing dependent relations. For an overview of the
debates that Said has instigated, see María do Mar Castro Varela and
Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
(Bielefeld: Transcript, 2005), pp. 37--46.
[44](#c1-note-0044a){#c1-note-0044} "Migration führt zu 'hybrider'
Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
(November 9, 2007), online \[--trans.\].
[45](#c1-note-0045a){#c1-note-0045} Homi K. Bhabha, *The Location of
Culture* (New York: Routledge, 1994), p. 4.
[46](#c1-note-0046a){#c1-note-0046} Elisabeth Bronfen and Benjamin
Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
(Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].
[47](#c1-note-0047a){#c1-note-0047} "What Is Postcolonial Thinking? An
Interview with Achille Mbembe," *Eurozine* (December 2006), online.
[48](#c1-note-0048a){#c1-note-0048} Migrants have always created their
own culture, which deals in various ways with the experience of
migration itself, but non-migrant populations have long tended to ignore
this. Things have now begun to change in this regard, for instance
through Imran Ayata and Bülent Kullukçu\'s compilation of songs by the
Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
(Munich: Trikont, 2013).
[49](#c1-note-0049a){#c1-note-0049} The conference programs can be
found at: \<\>.
[50](#c1-note-0050a){#c1-note-0050} "Deutschland entwickelt sich zu
einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
press release by the CDU/CSU Alliance in the German Parliament (June 4,
2014), online \[--trans.\].
[51](#c1-note-0051a){#c1-note-0051} Andreas Reckwitz, *Die Erfindung
der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
book is forthcoming: *The Invention of Creativity: Modern Society and
the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).
[52](#c1-note-0052a){#c1-note-0052} Gert Selle, *Geschichte des Design
in Deutschland* (Frankfurt am Main: Campus, 2007).
[53](#c1-note-0053a){#c1-note-0053} "Less Is More: The Design Ethos of
Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
type="pagebreak" title="181"}
[54](#c1-note-0054a){#c1-note-0054} The cybernetic perspective was
introduced to the field of design primarily by Buckminster Fuller. See
Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
and the Disappearance of the Outside* (Berlin: Sternberg, 2013).
[55](#c1-note-0055a){#c1-note-0055} Clive Dilnot, "Design as a Socially
Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
139--46.
[56](#c1-note-0056a){#c1-note-0056} Victor J. Papanek, *Design for the
Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
p. 2.
[57](#c1-note-0057a){#c1-note-0057} Reckwitz, *Die Erfindung der
Kreativität*.
[58](#c1-note-0058a){#c1-note-0058} B. Joseph Pine and James H.
Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
emphasis is original).
[59](#c1-note-0059a){#c1-note-0059} Mona El Khafif, *Inszenierter
Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).
[60](#c1-note-0060a){#c1-note-0060} Konrad Becker and Martin Wassermair
(eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).
[61](#c1-note-0061a){#c1-note-0061} See, for example, Andres Bosshard,
*Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
2009).
[62](#c1-note-0062a){#c1-note-0062} "An alternate reality game (ARG),"
according to Wikipedia, "is an interactive networked narrative that uses
the real world as a platform and employs transmedia storytelling to
deliver a story that may be altered by players\' ideas or actions."
[63](#c1-note-0063a){#c1-note-0063} Eric von Hippel, *Democratizing
Innovation* (Cambridge, MA: MIT Press, 2005).
[64](#c1-note-0064a){#c1-note-0064} It is often the case that the
involvement of users simply serves to increase the efficiency of
production processes and customer service. Many activities that were
once undertaken at the expense of businesses now have to be carried out
by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
Campus, 2005).
[65](#c1-note-0065a){#c1-note-0065} Beniger, *The Control Revolution*,
pp. 411--16.
[66](#c1-note-0066a){#c1-note-0066} Louis Althusser, "Ideology and
Ideological State Apparatuses (Notes towards an Investigation)," in
Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
(New York: Monthly Review Press, 1971), pp. 127--86.
[67](#c1-note-0067a){#c1-note-0067} Florian Becker et al. (eds),
*Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
2013), pp. 20--35.
[68](#c1-note-0068a){#c1-note-0068} Guy Debord, *The Society of the
Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
1977).
[69](#c1-note-0069a){#c1-note-0069} Derrick de Kerckhove, "McLuhan and
the Toronto School of Communication," *Canadian Journal of
Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
title="182"}
[70](#c1-note-0070a){#c1-note-0070} Marshall McLuhan, *Understanding
Media: The Extensions of Man* (New York: McGraw-Hill, 1964).
[71](#c1-note-0071a){#c1-note-0071} Nam June Paik, "Exposition of Music
-- Electronic Television" (leaflet accompanying the exhibition). Quoted
from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
(March 3, 2016), online.
[72](#c1-note-0072a){#c1-note-0072} Laura R. Linder, *Public Access
Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
1999).
[73](#c1-note-0073a){#c1-note-0073} Hans Magnus Enzensberger,
"Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
pp. 259--75.
[74](#c1-note-0074a){#c1-note-0074} Paul Groot, "Rabotnik TV,"
*Mediamatic* 2/3 (1988), online.
[75](#c1-note-0075a){#c1-note-0075} Inke Arns, "Social Technologies:
Deconstruction, Subversion and the Utopia of Democratic Communication,"
*Medien Kunst Netz* (2004), online.
[76](#c1-note-0076a){#c1-note-0076} The term was coined at a series of
conferences titled The Next Five Minutes (N5M), which were held in
Amsterdam from 1993 to 2003. See \<\>.
[77](#c1-note-0077a){#c1-note-0077} Mark Dery, *Culture Jamming:
Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
Media, 1993); Luther Blissett et al., *Handbuch der
Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).
[78](#c1-note-0078a){#c1-note-0078} Critical Art Ensemble, *Electronic
Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
1996).
[79](#c1-note-0079a){#c1-note-0079} Today this method is known as a
"distributed denial of service attack" (DDOS).
[80](#c1-note-0080a){#c1-note-0080} Max Weber, *Economy and Society: An
Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.
[81](#c1-note-0081a){#c1-note-0081} Ernst Friedrich Schumacher, *Small
Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
Harper Perennial, 2014).
[82](#c1-note-0082a){#c1-note-0082} Fred Turner, *From Counterculture
to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of
Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
21. In this regard, see also the documentary films *Das Netz* by Lutz
Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
Adam Curtis (2011).
[83](#c1-note-0083a){#c1-note-0083} It was possible to understand
cybernetics as a language of free markets or also as one of centralized
planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
great interest of Soviet scientists in cybernetics rendered the term
rather suspicious in the West, where it was disassociated from
artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}
[84](#c1-note-0084a){#c1-note-0084} Claus Pias, "The Age of
Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.
[85](#c1-note-0085a){#c1-note-0085} Norbert Wiener, one of the
cofounders of cybernetics, explained this as follows in 1950: "In giving
the definition of Cybernetics in the original book, I classed
communication and control together. Why did I do this? When I
communicate with another person, I impart a message to him, and when he
communicates back with me he returns a related message which contains
information primarily accessible to him and not to me. When I control
the actions of another person, I communicate a message to him, and
although this message is in the imperative mood, the technique of
communication does not differ from that of a message of fact.
Furthermore, if my control is to be effective I must take cognizance of
any messages from him which may indicate that the order is understood
and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
Cybernetics and Society*, 2nd edn (London: Free Association Books,
1989), p. 16.
[86](#c1-note-0086a){#c1-note-0086} Though presented here as distinct,
these interests could in fact be held by one and the same person. In
*From Counterculture to Cyberculture*, for instance, Turner discusses
"countercultural entrepreneurs."
[87](#c1-note-0087a){#c1-note-0087} Richard Brautigan, "All Watched
Over by Machines of Loving Grace," in *All Watched Over by Machines of
Loving Grace*, by Brautigan (San Francisco: The Communication Company,
1967).
[88](#c1-note-0088a){#c1-note-0088} David D. Clark, "A Cloudy Crystal
Ball: Visions of the Future," *Internet Engineering Task Force* (July
1992), online.
[89](#c1-note-0089a){#c1-note-0089} Castells, *The Rise of the Network
Society*.
[90](#c1-note-0090a){#c1-note-0090} Bill Gates, "An Open Letter to
Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.
[91](#c1-note-0091a){#c1-note-0091} Richard Stallman, "What Is Free
Software?", *GNU Operating System*, online.
[92](#c1-note-0092a){#c1-note-0092} The fundamentally cooperative
nature of programming was recognized early on. See Gerald M. Weinberg,
*The Psychology of Computer Programming*, rev. edn (New York: Dorset
House, 1998 \[originally published in 1971\]).
[93](#c1-note-0093a){#c1-note-0093} On the history of free software,
see Volker Grassmuck, *Freie Software: Zwischen Privat- und
Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).
[94](#c1-note-0094a){#c1-note-0094} In his first email on the topic, he
wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
system (just a hobby, won\'t be big and professional like gnu) \[...\].
This has been brewing since April, and is starting to get ready. I\'d
like any feedback on things people like/dislike." Linus Torvalds, "What
[]{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
Minix," *Usenet Group* (August 1991), online.
[96](#c1-note-0096a){#c1-note-0096} From 1997 to 2003, the average use
of online media in Germany climbed from 76 to 138 minutes per day, and
by 2013 it reached 169 minutes. Over the same span of time, the average
frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
was 5.8. From 2007 to 2013, the percentage of people who were members of
private social networks like Facebook grew from 15 percent to 46
percent. Of these, nearly 60 percent -- around 19 million people -- used
such services on a daily basis. The source of this information is the
article cited in the previous note.
[97](#c1-note-0097a){#c1-note-0097} "Internet Access Is 'a Fundamental
Right'," *BBC News* (8 March 2010), online.
[98](#c1-note-0098a){#c1-note-0098} Manuel Castells, *The Power of
Identity* (Oxford: Blackwell, 1997), pp. 7--22.
:::
:::
[II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}
::: {.section}
With the emergence of the internet around the turn of the millennium as
an omnipresent infrastructure for communication and coordination,
previously independent cultural developments began to spread beyond
their specific original contexts, mutually influencing and enhancing one
another, and becoming increasingly intertwined. Out of a disconnected
conglomeration of more or less marginalized practices, a new and
specific cultural environment thus took shape, usurping or marginalizing
an ever greater variety of cultural constellations. The following
discussion will focus on three *forms* of the digital condition; that
is, on those formal qualities that (notwithstanding all of its internal
conflicts and contradictions) lend a particular shape to this cultural
environment as a whole: *referentiality*, *communality*, and
*algorithmicity*. It is only because most of the cultural processes
operating under the digital condition are characterized by common formal
features such as these that it is reasonable to speak of the digital
condition in the singular.
"Referentiality" is a method with which individuals can inscribe
themselves into cultural processes and constitute themselves as
producers. Because culture is understood here as shared social meaning,
such an undertaking cannot be limited to the individual.
Rather, it takes place within a larger framework whose existence and
development depend on []{#Page_58 type="pagebreak" title="58"}communal
formations. "Algorithmicity" denotes those aspects of cultural processes
that are (pre-)arranged by the activities of machines. Algorithms
transform the vast quantities of data and information that characterize
so many facets of present-day life into dimensions and formats that can
be registered by human perception. It is impossible to read the content
of billions of websites. Therefore we turn to services such as Google\'s
search algorithm, which reduces the data flood ("big data") to a
manageable amount and translates it into a format that humans can
understand ("small data"). Without them, human beings could not
comprehend or do anything within a culture built around digital
technologies, but they influence our understanding and activity in an
ambivalent way. They create new dependencies by pre-sorting and making
the (informational) world available to us, yet simultaneously ensure our
autonomy by providing the preconditions that enable us to act.
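To make the reduction described above concrete, here is a deliberately
simple sketch in Python: it ranks an invented three-document corpus by
crude term counts and returns only the top results, turning a collection
too large to read into a short list. The corpus, query, and function name
are made up for this sketch; it stands in for the principle only, not for
Google\'s actual search algorithm.

```python
# A toy illustration of "big data" being reduced to "small data": rank a
# collection of texts by crude term counts and keep only the top results.
# This is not how an actual web-scale search engine works.

def top_matches(query, corpus, k=3):
    """Return the titles of the k documents that best match the query."""
    terms = query.lower().split()
    scored = []
    for title, text in corpus.items():
        words = text.lower().split()
        score = sum(words.count(term) for term in terms)
        if score > 0:
            scored.append((score, title))
    # Sort by score (highest first) and keep only the k best documents.
    return [title for score, title in sorted(scored, reverse=True)[:k]]

corpus = {
    "doc1": "rough consensus and running code",
    "doc2": "community and cooperation on the early internet",
    "doc3": "bureaucracy reduces complexity through formal procedures",
}

print(top_matches("internet community", corpus))  # ['doc2']
```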
:::
::: {.section}
Referentiality {#c2-sec-0002}
-----------------------------
In the digital condition, one of the methods (if not *the* most
fundamental method) enabling humans to participate -- alone or in groups
-- in the collective negotiation of meaning is the system of creating
references. In a number of arenas, referential processes play an
important role in the assignment of both meaning and form. According to
the art historian André Rottmann, for instance, "one might claim that
working with references has in recent years become the dominant
production-aesthetic model in contemporary
art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
with references, however, is hardly restricted to the world of
contemporary art. Referentiality is a feature of many processes that
encompass the operations of various genres of professional and everyday
culture. In its essence, it is the use of materials that are already
equipped with meaning -- as opposed to so-called raw material -- to
create new meanings. The referential techniques used to achieve this are
extremely diverse, a fact reflected in the numerous terms that exist to
describe them: re-mix, re-make, re-enactment, appropriation, sampling,
meme, imitation, homage, tropicália, parody, quotation, post-production,
re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
(non-academic) research, re-creativity, mashup, transformative use, and
so on.
These processes have two important aspects in common: the
recognizability of the sources and the freedom to deal with them however
one likes. The first creates an internal system of references from which
meaning and aesthetics are derived in an essential
manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
precondition enabling the creation of something that is both new and on
the same level as the re-used material. This represents a clear
departure from the historical--critical method, which endeavors to embed
a source in its original context in order to re-determine its meaning,
but also a departure from classical forms of rendition such as
translations, adaptations (for instance, adapting a book for a film), or
cover versions, which, though they translate a work into another
language or medium, still attempt to preserve its original meaning.
Re-mixes produced by DJs are one example of the referential treatment of
source material. In his book on the history of DJ culture, the
journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
salvaging authenticity, but with creating a new
authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
distancing themselves from the past, which would follow the (Western)
logic of progress or the spirit of the avant-garde, these processes
refer explicitly to precursors and to existing material. In one and the
same gesture, both one\'s own new position and the context and cultural
tradition that is being carried on in one\'s own work are constituted
performatively; that is, through one\'s own activity in the moment. I
will discuss this phenomenon in greater depth below.
To work with existing cultural material is, in itself, nothing new. In
modern montages, artists likewise drew upon available texts, images, and
treated materials. Yet there is an important difference: montages were
concerned with bringing together seemingly incongruous but stable
"finished pieces" in a more or less unmediated and fragmentary manner.
This is especially clear in the collages by the Dadaists or in
Expressionist literature such as Alfred Döblin\'s *Berlin
Alexanderplatz*. In these works, the experience of Modernity\'s many
fractures -- its fragmentation and turmoil -- was given a new aesthetic
form. In his reference to montages, Adorno thus observed that the
"negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
brief moment, he considered them an adequate expression for the
impossibility of reconciling the contradictions of capitalist culture.
Influenced by Adorno, the literary theorist Peter Bürger went so far as
to call the montage the true "paradigm of
modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
processes, on the contrary, pieces are not brought together as much as
they are integrated into one another by being altered, adapted, and
transformed. Unlike the older arrangement, it is not the fissures
between elements that are foregrounded but rather their synthesis in the
present. Conchita Wurst, the bearded diva, is not torn between two
conflicting poles. Rather, she represents a successful synthesis --
something new and harmonious that distinguishes itself by showcasing
elements of the old order (man/woman) and simultaneously transcending
them.
This synthesis, however, is usually just temporary, for at any time it
can itself serve as material for yet another rendering. Of course, this
is far easier to pull off with digital objects than with analog objects,
though these categories have become increasingly porous and thus
increasingly problematic as opposites. More and more objects exist both
in an analog and in a digital form. Think of photographs and slides,
which have become so easy to digitalize. Even three-dimensional objects
can now be scanned and printed. In the future, programmable materials
with controllable and reversible features will cause the difference
between the two domains to vanish: analog is becoming more and more
digital.
Montages and referential processes can only become widespread methods
if, in a given society, cultural objects are available in three
different respects. The first is economic and organizational: they must
be affordable and easily accessible. Whoever is unable to afford books
or get hold of them by some other means will not be able to reconfigure
any texts. The second is cultural: working with cultural objects --
which can always create deviations from the source in unpredictable ways
-- must not be treated as taboo or illegal, but rather as an everyday
activity without any special preconditions. It is much easier to
manipulate a text from a secular newspaper than one from a religious
canon. The third is material: it must be possible to use the material
and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
type="pagebreak" title="61"}
In terms of this third form of availability, montages differ from
referential processes, for cultural objects can be integrated into one
another -- instead of simply being placed side by side -- far more
readily when they are digitally coded. Information is digitally coded
when it is stored by means of a limited system of discrete (that is,
separated by finite intervals or distances) signs that are meaningless
in themselves. This allows information to be copied from one carrier to
another without any loss and it allows the respective signs, whether
individually or in groups, to be arranged freely. Seen in this way,
digital coding is not necessarily bound to computers but can rather be
realized with all materials: a mosaic is a digital process in which
information is coded by means of variously colored tiles, just as a
digital image consists of pixels. In the case of the mosaic, of course,
the resolution is far lower. Alphabetic writing is a form of coding
linguistic information by means of discrete signs that are, in
themselves, meaningless. Consequently, Florian Cramer has argued that
"every form of literature that is recorded alphabetically and not based
on analog parameters such as ideograms or orality is already digital in
that it is stored in discrete
signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
features of the alphabet, as Marshall McLuhan repeatedly underscored,
did not fully develop until the advent of the printing
press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
other words, that first abstracted written signs from analog handwriting
and transformed them into standardized symbols that could be repeated
without any loss of information. In this practical sense, the printing
press made writing digital, with the result that dealing with texts soon
became radically different.
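The property described above, lossless copying and the free rearrangement
of discrete signs that are meaningless in themselves, can be illustrated
in a few lines of code; the sample string and the "carriers" are of course
invented for the purpose:

```python
# A minimal sketch of discrete coding: a sequence of signs can be copied
# from one carrier to another without loss, and the signs themselves can
# be freely rearranged. The sample text is invented for illustration.

original = "GUTENBERG"

# "Copy to another carrier": encode to bytes and decode again; the result
# is indistinguishable from the original -- there is no degradation.
copy_on_new_carrier = original.encode("utf-8").decode("utf-8")
assert copy_on_new_carrier == original

# The discrete signs carry no meaning in themselves and can be arranged
# freely, yielding new sequences from the same limited sign system.
rearranged = "".join(sorted(original))

print(copy_on_new_carrier, rearranged)  # GUTENBERG BEEGGNRTU
```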
::: {.section}
### Information overload 1.0 {#c2-sec-0003}
The printing press made texts available in the three respects mentioned
above. For one thing, their number increased rapidly, while their price
significantly sank. During the first two generations after Gutenberg\'s
invention -- that is, between 1450 and 1500 -- more books were produced
than during the thousand years
before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
beginning. Dealing with books and their content changed from the ground
up. In manuscript culture, every new copy represented a potential
degradation of the original, and therefore []{#Page_62 type="pagebreak"
title="62"}the oldest sources (those that had undergone as little
corruption as possible) were valued above all. With the advent of print
culture, the idea took hold that texts could be improved by the process
of editing, not least because the availability of old sources, through
reprints and facsimiles, had also improved dramatically. Mere
reproduction was mechanized and thus ceased to be a cultural challenge.
According to the historian Elizabeth Eisenstein, one of the first
consequences of the greatly increased availability of the printed book
was that it overcame the "tyranny of major authorities, which was common
in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
were now able to compare texts with one another and critique them to an
unprecedented extent. Their general orientation turned around: instead
of looking back in order to preserve what they knew, they were now
looking ahead toward what they might not (yet) know.
In order to organize this information flood of rapidly amassing texts,
it was necessary to create new conventions: books were now specified by
their author, publisher, and date of publication, not to mention
furnished with page numbers. This enabled large numbers of texts to be
catalogued and every individual text -- indeed, every single passage --
to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
legitimize the pursuit of new knowledge by drawing attention to specific
mistakes or gaps in existing texts. In the scientific culture that was
developing at the time, the close connection between old and new
material was not simply regarded as something positive; it was also
urgently prescribed as a method of argumentation. Every text had to
contain an internal system of references, and this was the basis for the
development of schools, disciplines, and specific discourses.
The digital character of printed writing also made texts available in
the third respect mentioned above. Because discrete signs could be
reproduced without any loss of information, it was possible not only to
make perfect copies but also to remove content from one carrier and
transfer it to another. Materials were no longer simply arranged
sequentially, as in medieval compilations and almanacs, but manipulated
to give rise to a new and independent fluid text. A set of conventions
was developed -- one that remains in use today -- for modifying embedded
or quoted material in order for it []{#Page_63 type="pagebreak"
title="63"}to fit into its new environment. In this manner, quotations
could be altered in such a way that they could be integrated seamlessly
into a new text while remaining recognizable as direct citations.
Several of these conventions, for instance the use of square brackets to
indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
are also used in this very book. At the same time, the conventions for
making explicit references led to the creation of an internal reference
system that made the singular position of the new text legible within a
collective field of work. "Printing," to quote Elizabeth Eisenstein once
again, "encouraged forms of combinatory activity which were social as
well as intellectual. It changed relationships between men of learning
as well as between systems of
ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
in the form of letters and visits, intensified. The seventeenth century
saw the formation of the *respublica literaria* or the "Republic of
Letters," a loose network of scholars devoted to promoting the ideas of
the Enlightenment. Beginning in the eighteenth century, the rapidly
growing number of scientific fields was arranged and institutionalized
into clearly distinct disciplines. In the nineteenth and twentieth
centuries, diverse media-technical innovations made images, sounds, and
moving images available, though at first only in analog formats. These
created the preconditions that enabled the montage in all of its forms
-- film cuts, collages, readymades, *musique concrète*, found-footage
films, literary cut-ups, and artistic assemblages (to name only the
best-known genres) -- to become the paradigm of Modernity.
:::
::: {.section}
### Information overload 2.0 {#c2-sec-0004}
It was not until new technical possibilities for recording, storing,
processing, and reproduction appeared over the course of the 1990s that
it also became increasingly possible to code and edit images, audio, and
video digitally. Through the networking that followed soon thereafter,
society was flooded with an unprecedented amount of digitally
coded information *of every sort*, and the circulation of this
information accelerated. This was not, however, simply a quantitative
change but also and above all a qualitative one. Cultural materials
became available in a comprehensive []{#Page_64 type="pagebreak"
title="64"}sense -- economically and organizationally, culturally
(despite legal problems), and materially (because digitalized). Today it
would not be bold to predict that nearly every text, image, or sound
will soon exist in a digital form. Most of the new reproducible works
are already "born digital" and digitally distributed, or they are
physically produced according to digital instructions. Many initiatives
are working to digitalize older, analog works. We are now anchored in
the digital.
Among the numerous digitalization projects currently under way, the most
ambitious is that of Google Books, which, since its launch in 2004, has
digitalized around 20 million books from the collections of large
libraries and prepared them for full-text searches. Right from the
start, a fierce debate arose about the legal and cultural acceptability
of this project. One concern was whether Google\'s process infringed
upon the rights of the authors and publishers of the scanned books or
whether, according to American law, it qualified as "fair use," in which
case there would be no obligation for the company to seek authorization
or offer compensation. The second main concern was whether it would be
culturally or politically appropriate for a private corporation to hold
a de facto monopoly over the digital heritage of book culture. The first
issue incited a complex legal battle that, in 2013, was decided in
Google\'s favor by a judge on the United States District Court in New
York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
issue was the question of how a public library should look in the
twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
of 2008, the European Commission and the culture ministers of the
European Union launched the virtual library Europeana, after a number of
European countries had already invested hundreds of
millions of euros in various digitalization
initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
serves as a common access point to the online archives of around 2,500
European cultural institutions. By the end of 2015, its digital holdings
had grown to include more than 40 million objects. This is still,
however, a relatively small number, for it has been estimated that
European archives and museums contain more than 220 million
natural-historical and more than 260 million cultural-historical
objects. In the United States, discussions about the future of libraries
[]{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
Digital Public Library of America (DPLA), which, like Europeana,
provides common access to the digitalized holdings of archives, museums,
and libraries. By now, more than 14 million items can be viewed there.
In one way or another, however, both the private and the public projects
of this sort have been limited by binding copyright laws. The librarian
and book historian Robert Darnton, one of the most prominent advocates
of the Digital Public Library of America, has accordingly stated: "The
main impediment to the DPLA\'s growth is legal, not financial. Copyright
laws could exclude everything published after 1964, most works published
after 1923, and some that go back as far as
1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
Europe is similar to that in the United States. It, too, massively
obstructs the work of public
institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
has had the absurd consequence that certain materials, though they have
been fully digitalized, may only be accessed in part or exclusively
inside the facilities of a particular institution. Whereas companies
such as Google can afford to wage long legal battles, and in the
meantime create precedents, public institutions must proceed with great
caution, not least to avoid the accusation of using public funds to
violate copyright laws. Thus, they tend to fade into the background and
leave users, who are unfamiliar with the complex legal situation, with
the impression that they are even more out-of-date than they often are.
Informal actors, who explicitly operate beyond the realm of copyright
law, are not faced with such restrictions. UbuWeb, for instance, which
is the largest online archive devoted to the history of
twentieth-century avant-garde art, was not created by an art museum but
rather by the initiative of an individual artist, Kenneth Goldsmith.
Since 1996, he has been collecting historically relevant materials that
were no longer in distribution and placing them online for free and
without any stipulations. He forgoes the process of obtaining the rights
to certain works of art because, as he remarks on the website, "Let\'s
face it, if we had to get permission from everyone on UbuWeb, there
would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
simply be too demanding to do so. Because he pursues the project without
any financial interest and has saved so much []{#Page_66
type="pagebreak" title="66"}from oblivion, his efforts have provoked
hardly any legal difficulties. On the contrary, UbuWeb has become so
important that Goldsmith has begun to receive more and more material
directly from artists and their heirs, who would like certain works not
to be forgotten. Nevertheless, or perhaps for this very reason,
Goldsmith repeatedly stresses the instability of his archive, which
could disappear at any moment if he loses interest in maintaining it or
if something else happens. Users are therefore able to download works
from UbuWeb and archive, on their own, whatever items they find most
important. Of course, this fragility contradicts the idea of an archive
as a place for long-term preservation. Yet such a task could only be
undertaken by an institution that is oriented toward the long term.
Because of the existing legal conditions, however, it is hardly likely
that such an institution will come about.
Whereas Goldsmith is highly adept at operating within a niche that not
only tolerates but also accepts the violation of formal copyright
claims, large websites responsible for the uncontrolled dissemination of
digital content do not bother with such niceties. Their purpose is
rather to ensure that all popular content is made available digitally
and for free, whether legally or not. These sites, too, have experienced
uninterrupted growth. By the end of 2015, tens of millions of people
were simultaneously using the BitTorrent tracker The Pirate Bay -- the
largest nodal point for file-sharing networks during the last decade --
to exchange several million digital files with one
another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
despite protracted attempts to block or close down the file-sharing site
by legal means and despite a variety of competing services. Even when
the founders of the website were sentenced in Sweden to pay large fines
(around €3 million) and to serve time in prison, the site still did not
disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
same time, new providers have entered the market of free access; their
method is not to facilitate distributed downloads but rather to offer,
on account of the drastically reduced cost of data transfers, direct
streaming. Although some of these services are relatively easy to locate
and some have been legally banned -- the best-known case in Germany
being that of the popular site kino.to -- more of them continue to
appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
[]{#Page_67 type="pagebreak" title="67"}is not limited to music and
films, but encompasses all media formats. For instance, it is
foreseeable that the number of freely available plans for 3D objects
will increase along with the popularity of 3D printing. It has almost
escaped notice, however, that so-called "shadow libraries" have been
popping up everywhere; the latter are not accessible to the public but
rather to members, for instance, of closed exchange platforms or of
university intranets. Few seminars take place any more without a corpus
of scanned texts, regardless of whether this practice is legal or
not.[^22^](#c2-note-0022){#c2-note-0022a}
The lines between these different mechanisms of access are highly
permeable. Content acquired legally can make its way to file-sharing
networks as an illegal copy; content available for free can be sold in
special editions; content from shadow libraries can make its way to
publicly accessible sites; and, conversely, content that was once freely
available can disappear into shadow libraries. As regards free access,
the details of this rapidly changing landscape are almost
inconsequential, for the general trend that has emerged from these
various dynamics -- legal and illegal, public and private -- is
unambiguous: in a comprehensive and practical sense, cultural works of
all sorts will become freely available despite whatever legal and
technical restrictions might be in place. Whether absolutely all
material will be made available in this way is not the decisive factor,
at least not for the individual, for, as the German Library Association
has stated, "it is foreseeable that non-digitalized material will
increasingly escape the awareness of users, who have understandably come
to appreciate the ubiquitous availability and more convenient
processability of the digital versions of analog
objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
information, it is difficult to determine whether a particular work or a
crucial reference is missing, given that a multitude of other works and
references can be found in their place.
At the same time, prodigious amounts of new material are being produced
that, before the era of digitalization and networks, never could have
existed at all or never would have left the private sphere. An example
of this is amateur photography. This is nothing new in itself; as early
as 1888, Kodak was marketing its films and apparatus with the slogan
"You press the button, we do the rest," and ever since, []{#Page_68
type="pagebreak" title="68"}drawers and albums have been overflowing
with photographs. With the advent of digitalization, however, certain
economic and material limitations ceased to exist that, until then, had
caused most private photographers to think twice about how many shots
they wanted to take. After all, they had to pay for the film to be
developed and then store the pictures somewhere. Cameras also became
increasingly "intelligent," which improved the technical quality of
photographs. Even complex procedures such as increasing the level of
detail or the contrast ratio -- the ratio between an image\'s
brightest and darkest points -- no longer require any specialized
knowledge of photochemical processes in the darkroom. Today, such
features are often pre-installed in many cameras as an option (high
dynamic range). Ever since the introduction of built-in digital cameras
for smartphones, anyone with such a device can take pictures everywhere
and at any time and then store them digitally. Images can then be posted
on online platforms and shared with others. By the middle of 2015,
Flickr -- the largest but certainly not the only specialized platform of
this sort -- had more than 112 million registered users participating in
more than 2 million groups. Every user has access to free storage space
for about half a million of his or her own pictures. At that point, in
other words, the platform was equipped to manage more than 55 trillion
photographs. Around 3.5 million images were being uploaded every day,
many of which could be accessed by anyone. This may seem like a lot, but
in reality it is just a small portion of the pictures that are posted
online on a daily basis. Around that same time -- again, the middle of
2015 -- approximately 350 million pictures were being posted on Facebook
*every day*. The total number of photographs saved there has been
estimated to be 250 billion. In addition, there are also large platforms
for professional "stock photos" (supplies of pre-produced images that
are supposed to depict generic situations) and the databanks of
professional agencies such as Getty Images or Corbis. All of these images
can be found easily and acquired quickly (though not always for free).
Yet photography is not unique in this regard. In all fields, the number
of cultural artifacts available to the public on specialized platforms
has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
title="69"}
:::
::: {.section}
### The great disorder {#c2-sec-0005}
The old orders that had been responsible for filtering, organizing, and
publishing cultural material -- culture industries, mass media,
libraries, museums, archives, etc. -- are incapable of managing almost
any aspect of this deluge. They can barely function as gatekeepers any
more between those realms that, with their help, were once defined as
"private" and "public." Their decisions about what is or is not
important matter less and less. Moreover, having already been subjected
to a decades-long critique, their rules, which had been relatively
binding and formative over long periods of time, are rapidly losing
practical significance.
Even Europeana, a relatively small project based on traditional museums
and archives and with a mandate to make the European cultural heritage
available online, has contributed to the disintegration of established
orders: it indiscriminately brings together 2,500 previously separated
institutions. The specific semantic contexts that formerly shaped the
history and orientation of institutions have been dissolved or reduced
to dry meta-data, and millions upon millions of cultural artifacts are
now equidistant from one another. Instead of certain artifacts being
firmly anchored in a location, for instance in an ethnographic
collection devoted to the colonial history of France, it is now possible
for everything to exist side by side. Europeana is not an archive in the
traditional sense, or even a museum with a fixed and meaningful order;
rather, it is just a standard database. Everything in it is just one
search request away, and every search generates a unique order in the
form of a sequence of visible artifacts. As a result, individual objects
are freed from those meta-narratives, created by the museums and
archives that preserve them, which situate them within broader contexts
and assign more or less clear meanings to them. They consequently become
more open to interpretation. A search result does not articulate an
interpretive field of reference but merely a connection, created by
constantly changing search algorithms, between a request and the corpus
of material, which is likewise constantly changing.
Precisely because it offers so many different approaches to more or less
freely combinable elements of information, []{#Page_70 type="pagebreak"
title="70"}the order of the database no longer really provides a
framework for interpreting search results in a meaningful way.
Altogether, the meaning of many objects and signs is becoming even more
uncertain. On the one hand, this is because the connection to their
original context is becoming fragile; on the other hand, it is because
they can appear in every possible combination and in the greatest
variety of reception contexts. In less official archives and in less
specialized search engines, the dissolution of context is far more
pronounced than it is in the case of the Europeana project. For the sake
of orienting its users, for instance, YouTube provides the date when a
video has been posted, but there is no indication of when a video was
actually produced. Further information provided about a video, for
example in the comments section, is essentially unreliable. It might be
true -- or it might not. The internet researcher David Weinberger has
called this the "new digital disorder," which, at least for many users,
is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
individuals, this disorder has created both the freedom to establish
their own orders and the obligation of doing so, regardless of whether
or not they are ready for the task.
This tension between freedom and obligation is at its strongest online,
where the excess of culture and its more or less free availability are
immediate and omnipresent. In fact, everything that can be retrieved
online is culture in the sense that everything -- from the deepest layer
of hardware to the most superficial tweet -- has been made by someone
with a particular intention, and everything has been made to fit a
particular order. And it is precisely this excess of often contradictory
meanings and limited, regional, and incompatible orders that leads to
disorder and meaninglessness. This is not limited to the online world,
however, because the latter is not self-contained. In an essential way,
digital media also serve to organize the material world. On the basis of
extremely complex and opaque yet highly efficient logistical and
production processes, people are also confronted with constantly
changing material things about whose origins and meanings they have
little idea. Even something as simple to produce as yoghurt usually has
a thousand kilometers behind it before it ends up on a shelf in the
supermarket. The logistics that enable this are oriented toward
flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
together as efficiently as possible. It is nearly impossible for final
customers to find out anything about the ingredients. Customers are
merely supposed to be oriented by signs and notices such as "new" or "as
before," "natural," and "healthy," which are written by specialists and
meant to manipulate shoppers as much as the law allows. Even here, in
corporeal everyday life, every individual has to deal with a surge of
excess and disorder that threatens to erode the original meaning
conferred on every object -- even where such meaning was once entirely
unproblematic, as in the case of
yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
:::
::: {.section}
### Selecting and organizing {#c2-sec-0006}
In this situation, the creation of one\'s own system of references has
become a ubiquitous and generally accessible method for organizing all
of the ambivalent things that one encounters on a given day. Such things
are thus arranged within a specific context of meaning that also
(co)determines one\'s own relation to the world and subjective position
in it. Referentiality takes place through three types of activity, the
first being simply to attract attention to certain things, which affirms
(at least implicitly) that they are important. With every single picture
posted on Flickr, every tweet, every blog post, every forum post, and
every status update, the user is doing exactly that; he or she is
communicating to others: "Look over here! I think this is important!" Of
course, there is nothing new about filtering and allocating meaning. What
is new, however, is that these processes are no longer being carried out
primarily by specialists at editorial offices, museums, or archives, but
have become daily requirements for a large portion of the population,
regardless of whether they possess the material and cultural resources
that are necessary for the task.
:::
::: {.section}
### The loop through the body {#c2-sec-0007}
Given the flood of information that perpetually surrounds everyone, the
act of focusing attention and reducing vast numbers of possibilities
into something concrete has become a productive achievement, however
banal each of these micro-activities might seem on its own, and even if,
at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
be to focus the attention of the person doing it. The value of this
(often very brief) activity is that it singles out elements from the
uniform sludge of unmanageable complexity. Something plucked out in this
way gains value because it has required the use of a resource that
cannot be reproduced, that exists outside of the world of information
and that is invariably limited for every individual: our own lifetime.
Every status update that is not machine-generated means that someone has
invested time, be it only a second, in order to point to this and not to
something else. Thus, a process of validating what exists in the excess
takes place in connection with the ultimate scarcity -- our own
lifetimes, our own bodies. Even if the value generated by this act is
minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
famous definition of information -- a difference that makes a difference
in this stream of equivalencies and
meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
-- this use of one\'s own body to generate meaning -- does not, however,
take place by means of mere micro-activities throughout the day; it is
also a defining aspect of complex cultural strategies. In recent years,
re-enactment (that is, the re-staging of historical situations and
events) has established itself as a common practice in contemporary art.
Unlike traditional re-enactments, such as those of historically
significant battles, which attempt to represent the past as faithfully
as possible, "artistic re-enactments," according to the curator Inke
Arns, "are not an affirmative confirmation of the past; rather, they are
*questionings* of the present through reaching back to historical
events," especially as they are represented in images and other forms of
documentation. Thanks to search engines and databases, such
representations are more or less always present, though in the form of
indeterminate images, ambivalent documents, and contentious
interpretations. Artists in this situation, as Arns explains,
::: {.extract}
do not ask the naïve question about what really happened outside of the
history represented in the media -- the "authenticity" beyond the images
-- instead, they ask what the images we see might mean concretely to us,
if we were to experience these situations personally. In this way the
artistic reenactment confronts the general feeling of insecurity about
the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
paradoxical approach: through erasing distance to the images and at the
same time distancing itself from the
images.[^27^](#c2-note-0027){#c2-note-0027a}
:::
This paradox manifests itself in that the images are appropriated and
sublated through the use of one\'s own body in the re-enactments. They
simultaneously refer to the past and create a new reality in the
present. In perhaps the best-known re-enactment of this type, the artist
Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
central episodes of the British miners\' strike of 1984 and 1985. This
historical event is regarded as a turning point in the protracted
conflict between Margaret Thatcher\'s government and the labor unions --
a key moment in the implementation of Great Britain\'s neoliberal
regime, which is still in effect today. In Deller\'s re-enactment, the
heart of the matter is not historical accuracy, which is always
controversial in such epoch-changing events. Rather, he focuses on the
former participants -- the miners and police officers alike, who, along
with non-professional actors, lived through the situation again -- in
order to explore both the distance from the events and their
representation in the media, as well as their ongoing biographical and
societal presence.[^28^](#c2-note-0028){#c2-note-0028a}
Elaborate practices of embodying medial images through processes of
appropriation and distancing have also found their way into popular
culture, for instance in so-called "cosplay." The term, which is a
contraction of the words "costume" and "play," was coined by a Japanese
man named Nobuyuki Takahashi. In 1984, while attending the World Science
Fiction Convention in Los Angeles, he used the word to describe the
practice of certain attendees of dressing up as their favorite characters.
Participants in cosplay embody fictitious figures -- mostly from the
worlds of science fiction, comics/manga, or computer games -- by donning
home-made costumes and striking characteristic
poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
effort that goes into this is mostly reflected in the costumes, not in
the choreography or dramaturgy of the performance. What is significant
is that these costumes are usually not exact replicas but are rather
freely adapted by each player to represent the character as he or she
interprets it to be. Accordingly, "Cosplay is a form of appropriation
[]{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
performs an existing story in close connection to the fan\'s own
identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
admittedly, goes back quite far in the history of fan culture, but it
has experienced a striking surge through the opportunity for fans to
network with one another around the world, to produce costumes and
images of professional quality, and to place themselves on the same
level as their (fictitious) idols. By now it has become a global
subculture whose members are active not only online but also at hundreds
of conventions throughout the world. In Germany, an annual cosplay
competition has been held since 2007 (it is organized by the Frankfurt
Book Fair and Animexx, the country\'s largest manga and anime
community). The scene, which has grown and branched out considerably
over the past few years, has slowly begun to professionalize, with
shops, books, and players who make paid appearances. Even in fan
culture, stars are born. As soon as the subculture has exceeded a
certain size, this gradual onset of commercialization will undoubtedly
lead to tensions within the community. For now, however, two of its
noteworthy features remain: the power of the desire to appropriate, in a
bodily manner, characters from vast cultural universes, and the
widespread combination of free interpretation and meticulous attention
to detail.
:::
::: {.section}
### Lineages and transformations {#c2-sec-0008}
Because of the great effort that they require, re-enactment and cosplay
are somewhat extreme examples of singling out, appropriating, and
referencing. As everyday activities that almost take place incidentally,
however, these three practices usually do not make any significant or
lasting differences. Yet they do not happen just once, but over and over
again. They accumulate and thus constitute referentiality\'s second type
of activity: the creation of connections between the many things that
have attracted attention. In such a way, paths are forged through the
vast complexity. These paths, which can be formed, for instance, by
referring to different things one after another, likewise serve to
produce and filter meaning. Things that can potentially belong in
multiple contexts are brought into a single, specific context. For the
individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
fields of attention, reference systems, and contexts of meaning are
first established. In the third step, the things that have been selected
and brought together are changed. Perhaps something is removed to modify
the meaning, or perhaps something is added that was previously absent or
unavailable. Either way, referential culture is always producing
something new.
These processes are applied both within individual works (referentiality
in a strict sense) and within currents of communication that consist of
numerous molecular acts (referentiality in a broader sense). This latter
sort of compilation is far more widespread than the creation of new
re-mix works. Consider, for example, the billionfold sequences of status
updates, which sometimes involve a link to an interesting video,
sometimes a post of a photograph, then a short list of favorite songs, a
top 10 chart from one\'s own feed, or anything else. Such methods of
inscribing oneself into the world by means of references, combinations,
or alterations are used to create meaning through one\'s own activity in
the world and to constitute oneself in it, both for one\'s self and for
others. In a culture that manifests itself to a great extent through
mediatized communication, people have to constitute themselves through
such acts, if only by posting
"selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
risk invisibility and being forgotten.
On this basis, a genuine digital folk culture of re-mixing and mashups
has formed in recent years on online platforms, in game worlds, but also
through cultural-economic productions of individual pieces or short
series. It is generated and maintained by innumerable people with
varying degrees of intensity and ambition. Its common feature with
traditional folk culture, in choirs or elsewhere, is that production
and reception (but also reproduction and creation) largely coincide.
Active participation admittedly requires a certain degree of
proficiency, interest, and engagement, but usually not any extraordinary
talent. Many classical institutions such as museums and archives have
been attempting to take part in this folk culture by setting up their
own re-mix services. They know that the "public" is no longer able or
willing to limit its engagement with works of art and cultural history
to one of quiet contemplation. At the end of 2013, even []{#Page_76
type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
initiated a re-mix competition. A year earlier, the Rijksmuseum in
Amsterdam launched so-called "Rijksstudios." Since then, the museum has
made available on its website more than 200,000 high-resolution images
from its collection. Users are free to use these to create their own
re-mixes online and share them with others. Interestingly, the
Rijksmuseum does not distinguish between the work involved in
transforming existing pieces and that involved in curating one\'s own
online gallery.
Referential processes have no beginning and no end. Any material that is
used to make something new has a pre-history of its own, even if its
traces are lost in clouds of uncertainty. Upon closer inspection, this
cloud might clear a little bit, but it is extremely uncommon for a
genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
raises the question of whether there can really be something like
originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
Regardless of the answer to this question, the fact that by now many
people select, combine, and alter objects on a daily basis has led to a
slow shift in our perception and sensibilities. In light of the
experiences that so many people are creating, the formerly exotic
theories of deconstruction suddenly seem anything but outlandish. Nearly
half a century ago, Roland Barthes defined the text as a fabric of
quotations, and this incited vehement
opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
would be inclined to say today, "that can be statistically proven
through software analysis!" Amazon identifies books by means of their
"statistically improbable phrases"; that is, by means of textual
elements that are highly unlikely to occur elsewhere. This implies, of
course, that books contain many textual elements that are highly likely
to be found in other texts, without suggesting that such elements would
have to be regarded as plagiarism.
In the Gutenberg Galaxy, with its fixation on writing, the earliest
textual document is usually understood to represent a beginning. If no
references to anything before can be identified, the text is then
interpreted as a closed entity, as a new text. Thus, fairy tales and
sagas, which are typical elements of oral culture, are still more
strongly associated with the names of those who recorded them than with
the names of those who narrated them. This does not seem very convincing
today. In recent years, literary historians have made strong []{#Page_77
type="pagebreak" title="77"}efforts to shift the focus of attention to
the people (mostly women) who actually told certain fairy tales. In
doing so, they have been able to work out to what extent the respective
narrators gave shape to specific stories, which were written down as
common versions, and to what extent these stories reflect their
narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}
Today, after more than 40 years of deconstructionist theory and a change
in our everyday practices, it is no longer controversial to read works
-- even by canonical figures like Wagner or Mozart -- in such a way as
to highlight the other works, either by the artists in question or by
other artists, that are contained within
them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
decreased appreciation but rather an indication that, as Zygmunt Bauman
has stressed, "The way human beings understand the world tends to be at
all times *praxeomorphic*: it is always shaped by the know-how of the
day, by what people can do and how they usually go about doing
it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
today is one of singling out, bringing together, altering, and adding.
Accordingly, not only has our view of current cultural production
shifted; our view of cultural history has shifted as well. As always,
the past is made to suit the sensibilities of the present.
As a rule, however, things that have no beginning also have no end. This
is not only because they can in turn serve as elements for other new
contexts of meaning, but also because the attention paid to the context
in which they take on specific meaning is sensitive to the work that has
to be done to maintain the context itself. Even timelessness is an
elaborate everyday business. The attempt to rescue works of art from the
ravages of time -- to preserve them forever -- means that they regularly
need to be restored. Every restoration inevitably stirs a debate about
whether the planned interventions are appropriate and about how to deal
with the traces of previous interventions, which, from the current
perspective, often seem to be highly problematic. Whereas, just a
generation ago, preservationists ensured that such interventions
remained visible (as articulations of the historical fissures that are
typical of Modernity), today greater emphasis is placed on reducing
their visibility and re-creating the illusion of an "original condition"
(without, however, impeding any new functionality that a piece might
have in the present). []{#Page_78 type="pagebreak" title="78"}The
historically faithful restoration of the Berlin City Palace, together
with its repurposing as a museum and meeting place, is typical of this
new attitude in dealing with our historical heritage.
In everyday activity, too, the never-ending necessity of this work can
be felt at all times. Here the issue is not timelessness, but rather
that the established contexts of meaning quickly become obsolete and
therefore have to be continuously affirmed, expanded, and changed in
order to maintain the relevance of the field that they define. This
lends referentiality a performative character that combines productive
and reproductive dimensions. That which is not constantly used and
renewed simply disappears. Often, however, this only means that it will
sink into an endless archive and become unrealized potential until
someone reactivates it, breathes new life into it, rouses it from its
slumber, and incorporates it into a newly relevant context of meaning.
"To be relevant," according to the artist Eran Schaerf, "things must be
recyclable."[^37^](#c2-note-0037){#c2-note-0037a}
Alone, everyone is overwhelmed by the task of having to generate meaning
against this backdrop of all-encompassing meaninglessness. First, the
challenge is too great for any individual to overcome; second, meaning
itself is only created intersubjectively. While it can admittedly be
asserted by a single person, others have to confirm it before it can
become a part of culture. For this reason, the actual subject of
cultural production under the digital condition is not the individual
but rather the next-largest unit.
:::
:::
::: {.section}
## Communality {#c2-sec-0009}
As an individual, it is impossible to orient oneself within a complex
environment. Meaning -- as well as the ability to act -- can only be
created, reinforced, and altered in exchange with others. This is
hardly noteworthy; biologically and culturally, people are social
beings. What has changed historically is how people are integrated into
larger contexts, how processes of exchange are organized, and what every
individual is expected to do in order to become a fully fledged
participant in these processes. For nearly 50 years, traditional
[]{#Page_79 type="pagebreak" title="79"}institutions -- that is,
hierarchically and bureaucratically organized civic institutions such
as established churches, labor unions, and political parties -- have
continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
In tandem with this, the overall commitment to the identities, family
values, and lifestyles promoted by these institutions has likewise been
in decline. The great mechanisms of socialization from the late stages
of the Gutenberg Galaxy have been losing more and more of their
influence, though at different speeds and to different extents. All
told, however, explicitly and collectively normative impulses are
decreasing, while others (implicitly economic, above all) are on the
rise. According to mainstream sociology, a cause or consequence of this
is the individualization and atomization of society. As early as the
middle of the 1980s, Ulrich Beck claimed: "In the individualized society
the individual must therefore learn, on pain of permanent disadvantage,
to conceive of himself or herself as the center of action, as the
planning office with respect to his/her own biography, abilities,
orientations, relationships and so
on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
the dominant neoliberal political orientation, with its strong stress on
the freedom of the individual -- to realize oneself as an individual
actor in the allegedly open market and in opposition to allegedly
domineering collective mechanisms -- has radicalized these tendencies
even further. The ability to act, however, is not only a question of
one\'s personal attitude but also of material resources. And it is this
same neoliberal politics that deprives so many people of the resources
needed to take advantage of these new freedoms in their own lives. As a
result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."
Under the digital condition, this process has permeated the finest
structures of social life. Individualization, commercialization, and the
production of differences (through design, for instance) are ubiquitous.
Established civic institutions are not alone in being hollowed out;
relatively new collectives are also becoming more differentiated, a
development that I outlined above with reference to the transformation
of the gay movement into the LGBT community. Yet nevertheless, or
perhaps for this very reason, new forms of communality are being formed
in these offshoots -- in the small activities of everyday life. And
these new communal formations -- rather []{#Page_80 type="pagebreak"
title="80"}than individual people -- are the actual subjects who create
the shared meaning that we call culture.
::: {.section}
### The problem of the "community" {#c2-sec-0010}
I have chosen the rather cumbersome expression "communal formation" in
order to avoid the term "community" (*Gemeinschaft*), although the
latter is used increasingly often in discussions of digital cultures and
has played an important role, from the beginning, in conceptions of
networking. Viewed analytically, however, "community" is a problematic
term because it is almost hopelessly overloaded. Particularly in the
German-speaking tradition, Ferdinand Tönnies\'s polar distinction
between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
which he introduced in 1887, remains
influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
fundamentally different and exclusive types of social relations. Whereas
community is characterized by the overlapping multidimensional nature of
social relationships, society is defined by the functional separation of
its sectors and spheres. Community embeds every individual into complex
social relationships, all of which tend to be simultaneously present. In
the traditional village community ("communities of place," in Tönnies\'s
terms), neighbors are involved with one another, for better or for
worse, both on a familiar basis and economically or religiously. Every
activity takes place on several different levels at the same time.
Communities are comprehensive social institutions that penetrate all
areas of life, endowing them with meaning. Through mutual dependency,
they create stability and security, but they also obstruct change and
hinder social mobility. Because everyone is connected with each other,
no can leave his or her place without calling into question the
arrangement as a whole. Communities are thus structurally conservative.
Because every human activity is embedded in multifaceted social
relationships, every change requires adjustments across the entire
interrelational web -- a task that is not easy to accomplish.
Accordingly, the traditional communities of the eighteenth and
nineteenth centuries fiercely opposed the establishment of capitalist
society. In order to impose the latter, the old community structures
were broken apart with considerable violence. This is what Marx
[]{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
that famous passage from *The Communist Manifesto*: "All the settled,
age-old relations with their train of time-honoured preconceptions and
viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
smoke, everything sacred is
profaned."[^41^](#c2-note-0041){#c2-note-0041a}
The defining feature of society, on the contrary, is that it frees the
individual from such multifarious relationships. Society, according to
Tönnies, separates its members from one another. Although they
coordinate their activity with others, they do so in order to pursue
partial, short-term, and personal goals. Not only are people separated,
but so too are different areas of life. In a market-oriented society,
for instance, the economy is conceptualized as an independent sphere. It
can therefore break away from social connections to be organized simply
by limited formal or legal obligations between actors who, beyond these
obligations, have nothing else to do with one another. Costs or benefits
that inadvertently affect people who are uninvolved in a given market
transaction are referred to by economists as "externalities," and market
participants do not need to care about these because they are strictly
pursuing their own private interests. One of the consequences of this
form of social relationship is a heightened social dynamic, for now it
is possible to introduce changes into one area of life without
considering its effects on other areas. In the end, the dissolution of
mutual obligations, increased uncertainty, and the reduction of many
social connections go hand in hand with what Marx and Engels referred to
in *The Communist Manifesto* as "unfeeling hard cash."
From this perspective, the historical development looks like an
ambivalent process of modernization in which society (dynamic, but cold)
is erected over the ruins of community (static, but warm). This is an
unusual combination of romanticism and progress-oriented thinking, and
the problems with this influential perspective are numerous. There is,
first, the matter of its dichotomy; that is, its assumption that there
can only be these two types of arrangement, community and society. Or
there is the notion that the one form can be completely ousted by the
other, even though aspects of community and aspects of society exist at
the same time in specific historical situations, be it in harmony or in
conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
type="pagebreak" title="82"}These impressions, however, which are so
firmly associated with the German concept of *Gemeinschaft*, make it
rather difficult to comprehend the new forms of communality that have
developed in the offshoots of networked life. This is because, at least
for now, these latter forms do not represent a genuine alternative to
societal types of social
connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
"community" is somewhat more open. The opposition between community and
society resonates with it as well, although the dichotomy is not as
clear-cut. American communitarianism, for instance, considers the
difference between community and society to be gradual and not
categorical. Its primary aim is to strengthen civic institutions and
mechanisms, and it regards community as an intermediary level between
the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
there is a related English term, which seems even more productive for my
purposes, namely "community of practice," a concept that is more firmly
grounded in the empirical observation of concrete social relationships.
The term was introduced at the beginning of the 1990s by the social
researchers Jean Lave and Étienne Wenger. They observed that, in most
cases, professional learning (for instance, in their case study of
midwives) does not take place as a one-sided transfer of knowledge or
proficiency, but rather as an open exchange, often outside of the formal
learning environment, between people with different levels of knowledge
and experience. In this sense, learning is an activity that, though
distinguishable, cannot easily be separated from other "normal"
activities of everyday life. As Lave and Wenger stress, however, the
community of practice is not only a social space of exchange; it is
rather, and much more fundamentally, "an intrinsic condition for the
existence of knowledge, not least because it provides the interpretive
support necessary for making sense of its
heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
are thus always epistemic communities that form around certain ways of
looking at the world and one\'s own activity in it. What constitutes a
community of practice is thus the joint acquisition, development, and
preservation of a specific field of practice that contains abstract
knowledge, concrete proficiencies, the necessary material and social
resources, guidelines, expectations, and room to interpret one\'s own
activity. All members are active participants in the constitution of
this field, and this reinforces the stress on []{#Page_83
type="pagebreak" title="83"}practice. Each of them, however, brings
along different presuppositions and experiences, for each is embedded
within numerous and specific situations of life or work.
The processes within the community are mostly informal, and yet they are
thoroughly structured, for authority is distributed unequally and is
based on the extent to which the members value each other\'s (and their
own) levels of knowledge and experience. At first glance, then, the term
"community of practice" seems apt to describe the meaning-generating
communal formations that are at issue here. It is also somewhat
problematic, however, because, having since been subordinated to
management strategies, its use is now narrowly applied to professional
learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}
From these various notions of community, it is possible to develop the
following way of looking at new types of communality: they are formed in
a field of practice, characterized by informal yet structured exchange,
focused on the generation of new ways of knowing and acting, and
maintained through the reflexive interpretation of their own activity.
This last point in particular -- the communal creation, preservation,
and alteration of the interpretive framework in which actions,
processes, and objects acquire a firm meaning and connection -- can be
seen as the central role of communal formations.
Communication is especially significant to them. Individuals must
continuously communicate in order to constitute themselves within the
fields and practices, or else they will remain invisible. The mass of
tweets, updates, emails, blogs, shared pictures, texts, posts on
collaborative platforms, and databases (etc.) that are necessary for
this can only be produced and processed by means of digital
technologies. In this act of incessant communication, which is a
constitutive element of social existence, the personal desire for
self-constitution and orientation becomes enmeshed with the outward
pressure of having to be present and available to form a new and binding
set of requirements. This relation between inward motivation and outward
pressure can vary highly, depending on the character of the communal
formation and the position of the individual within it (although it is
not the individual who determines what successful communication is, what
represents a contribution to the communal formation, or in which form
one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
decisions are made by other members of the formation in the form of
positive or negative feedback (or none at all), and they are made with
recourse to the interpretive framework that has been developed in
common. These communal and continuous acts of learning, practicing, and
orientation -- the exchange, that is, between "novices" and "experts" on
the same field, be it concerned with internet politics, illegal street
racing, extreme right-wing music, body modification, or a free
encyclopedia -- serve to maintain the framework of shared meaning,
expand the constituted field, recruit new members, and adapt the
framework of interpretation and activity to changing conditions. Such
communal formations constitute themselves; they preserve and modify
themselves by constantly working out the foundations of their
constitution. This may sound circular, for the process of reflexive
self-constitution -- "autopoiesis" in the language of systems theory --
is circular in the sense that control is maintained through continuous,
self-generating feedback. Self-referentiality is a structural feature of
these formations.
:::
::: {.section}
### Singularity and communality {#c2-sec-0011}
The new communal formations are informal forms of organization that are
based on voluntary action. No one is born into them, and no one
possesses the authority to force anyone else to join or remain against
his or her will, or to assign anyone with tasks that he or she might be
unwilling to do. Such a formation is not an enclosed disciplinary
institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
and, within it, power is not exercised through commands, as in the
classical sense formulated by Max
Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
locked up and not being subordinated can, at least at first, represent
for the individual a gain in freedom. Under a given set of conditions,
everyone can (and must) choose which formations to participate in, and
he or she, in doing so, will have a better or worse chance to influence
the communal field of reference.
On the everyday level of communicative self-constitution and creating a
personal cognitive horizon -- in innumerable streams, updates, and
timelines on social mass media -- the most important resource is the
attention of others; that is, their feedback and the mutual recognition
that results from it. []{#Page_85 type="pagebreak" title="85"}And this
recognition may simply be in the form of a quickly clicked "like," which
is the smallest unit that can assure the sender that, somewhere out
there, there is a receiver. Without the latter, communication has no
meaning. The situation is somewhat menacing if no one clicks the "like"
button beneath a post or a photo. It is a sign that communication has
broken down, and the result is the dissolution of one\'s own communicatively
constituted social existence. In this context, the boundaries are
blurred between the categories of information, communication, and
activity. Making information available always involves the active --
that is, communicating -- person, and not only in the case of ubiquitous
selfies, for in an overwhelming and chaotic environment, as discussed
above, selection itself is of such central importance that the
differences between the selected and the selecting become fluid,
particularly when the goal of the latter is to experience confirmation
from others. In this back-and-forth between one\'s own presence and the
validation of others, one\'s own motives and those of the community are
not in opposition but rather mutually depend on one another. Condensed
to simple norms and to a basic set of guidelines within the context of
an image-oriented social mass media service, the rule (or better:
friendly tip) that one need not but probably ought to follow is this:
::: {.extract}
Be an active member of the Instagram community to receive likes and
comments. Take time to comment on a friend\'s photo, or to like photos.
If you do this, others will reciprocate. If you never acknowledge your
followers\' photos, then they won\'t acknowledge
you.[^49^](#c2-note-0049){#c2-note-0049a}
:::
The context of this widespread and highly conventional piece of advice
is not, for instance, a professional marketing campaign; it is simply
about personally positioning oneself within a social network. The goal
is to establish one\'s own, singular, identity. The process required to
do so is not primarily inward-oriented; it is not based on questions
such as: "Who am I really, apart from external influences?" It is rather
outward-oriented. It takes place through making connections with others
and is concerned with questions such as: "Who is in my network, and what
is my position within it?" It is []{#Page_86 type="pagebreak"
title="86"}revealing that none of the tips in the collection cited above
offers advice about achieving success within a community of
photographers; there are no suggestions, for instance, about how to
take high-quality photographs. With smart cameras and built-in filters
for post-production, this is not especially challenging any more,
especially because individual pictures, to be examined closely and on
their own terms, have become less important gauges of value than streams
of images that are meant to be quickly scrolled through. Moreover, the
function of the critic, who once monopolized the right to interpret and
evaluate an image for everyone, is no longer of much significance.
Instead, the quality of a picture is primarily judged according to
whether "others like it"; that is, according to its performance in the
ongoing popularity contest within a specific niche. But users do not
rely on communal formations and the feedback they provide just for the
sharing and evaluation of pictures. Rather, this dynamic has come to
determine more and more facets of life. Users experience the
constitution of singularity and communality, in which a person can be
perceived as such, as simultaneous and reciprocal processes. A million
times over and nearly subconsciously (because it is so commonplace),
they engage in a relationship between the individual and others that no
longer really corresponds to the liberal opposition between
individuality and society, between personal and group identity. Instead
of viewing themselves as exclusive entities (either in terms of the
emphatic affirmation of individuality or its dissolution within a
homogeneous group), the new formations require that the production of
difference and commonality takes place
simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
:::
::: {.section}
### Authenticity and subjectivity {#c2-sec-0012}
Because members have decided to participate voluntarily in the
community, their expressions and actions are regarded as authentic, for
it is implicitly assumed that, in making these gestures, they are not
following anyone else\'s instructions but rather their own motivations.
The individual does not act as a representative or functionary of an
organization but rather as a private and singular (that is, unique)
person. At a gathering of the Occupy movement, a sure way to be
kicked out is to stick stubbornly to a party line, even if this way
[]{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
with that of the movement. Not only at Occupy gatherings, however, but
in all new communal formations it is expected that everyone there is
representing his or her own interests. As most people are aware, this
assumption is theoretically naïve and often proves to be false in
practice. Even spontaneity can be calculated, and in many cases it is.
Nevertheless, the expectation of authenticity is relevant because it
creates a minimum of trust. As the basis of social trust, such
contra-factual expectations exist elsewhere as well. Critical readers of
newspapers, for instance, must assume that what they are reading has
been well researched and is presented as objectively as possible, even
though they know that objectivity is theoretically a highly problematic
concept -- to this extent, postmodern theory has become common knowledge
-- and that newspapers often pursue (hidden) interests or lead
campaigns. Yet without such contra-factual assumptions, the respective
orders of knowledge and communication would not function, for they
provide the normative framework within which deviations can be
perceived, criticized, and sanctioned.
In a seemingly traditional manner, the "authentic self" is formulated
with reference to one\'s inner world, for instance to personal
knowledge, interests, or desires. As the core of personality, however,
this inner world no longer represents an immutable and essential
characteristic but rather a temporary position. Today, even someone\'s
radical reinvention can be regarded as authentic. This is the central
difference from the classical, bourgeois conception of the subject. The
self is no longer understood in essentialist terms but rather
performatively. Accordingly, the main demand on the individual who
voluntarily opts to participate in a communal formation is no longer to
be self-aware but rather to be
self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
any more for one\'s core self to be coherent. It is not a contradiction
to appear in various communal formations, each different from the next,
as a different "I myself," for every formation is comprehensive, in that
it appeals to the whole person, and simultaneously partial, in that it
is oriented toward a particular goal and not toward all areas of life.
As in the case of re-mixes and other referential processes, the concern
here is not to preserve authenticity but rather to create it in the
moment. The success or failure []{#Page_88 type="pagebreak"
title="88"}of these efforts is determined by the continuous feedback of
others -- one like after another.
These practices have led to a modified form of subject constitution for
which some sociologists, engaged in empirical research, have introduced
the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
The idea is based on the observation that people in Western societies
(the case studies were mostly in North America) are defining their
identity less and less by their family, profession, or other stable
collective, but rather increasingly in terms of their personal social
networks; that is, according to the communal formations in which they
are active as individuals and in which they are perceived as singular
people. In this regard, individualization and atomization no longer
necessarily go hand in hand. On the contrary, the intertwined nature of
personal identity and communality can be experienced on an everyday
level, given that both are continuously created, adapted, and affirmed
by means of personal communication. This makes the networks in question
simultaneously fragile and stable. Fragile because they require the
ongoing presence of every individual and because communication can break
down quickly. Stable because the networks of relationships that can
support a single person -- as regards the number of those included,
their geographical distribution, and the duration of their cohesion --
have expanded enormously by means of digital communication technologies.
Here the issue is not that of close friendships, whose number remains
relatively constant for most people and over long periods of
time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
ties"; that is, more or less loose acquaintances that can be tapped for
new information and resources that do not exist within one\'s close
circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
are expanded, the more sustainable and valuable these networks become,
for they bring together a large number of people and thus multiply the
material and organizational resources that are (potentially) accessible
to the individual. It is impossible to make a sweeping statement as to
whether these formations actually represent communities in a
comprehensive sense and how stable they really are, especially in times
of crisis, for this is something that can only be found out on a
case-by-case basis. It is relevant that the development of personal
networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
a vacuum. The disintegration of institutions that were formerly
influential in the formation of identity and meaning began long before
the large-scale spread of networks. For most people, there is no other
choice but to attempt to orient and organize oneself, regardless of how
provisional or uncertain this may be. Or, as Manuel Castells somewhat
melodramatically put it, "At the turn of the millennium, the king and
the queen, the state and civil society, are both naked, and their
children-citizens are wandering around a variety of foster
homes."[^55^](#c2-note-0055){#c2-note-0055a}
:::
::: {.section}
### Space and time as a communal practice {#c2-sec-0013}
Although participation in a communal formation is voluntary, it is not
unselfish. Quite the contrary: an important motivation is to gain access
to a formation\'s constitutive field of practice and to the resources
associated with it. A communal formation ultimately does more than
simply steer the attention of its members toward one another. Through
the common production of culture, it also structures how the members
perceive the world and how they are able to design themselves and their
potential actions in it. It is thus a cooperative mechanism of
filtering, interpretation, and constitution. Through the everyday
referential work of its members, the community selects a manageable
amount of information from the excess of potentially available
information and brings it into a meaningful context, whereby it
validates the selection itself and orients the activity of each of its
members.
The new communal formations consist of self-referential worlds whose
constructive common practice affects the foundations of social activity
itself -- the constitution of space and time. How? The spatio-temporal
horizon of digital communication is a global (that is, placeless) and
ongoing present. The technical vision of digital communication is always
the here and now. With the instant transmission of information,
everything that is not "here" is inaccessible and everything that is not
"now" has disappeared. Powerful infrastructure has been built to achieve
these effects: data centers, intercontinental networks of cables,
satellites, high-performance nodes, and much more. Through globalized
high-frequency trading, actors in the financial markets have realized
this []{#Page_90 type="pagebreak" title="90"}technical vision to its
broadest extent by creating a never-ending global present whose expanse
is confined to milliseconds. This process is far from coming to an end,
for massive amounts of investment are allocated to accomplish even the
smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
300-million-dollar transatlantic telecommunications cable (Hibernia
Express) was put into operation between London and New York -- the first
in more than 10 years -- with the single goal of accelerating automated
trading between the two places by 5.2 milliseconds.
For social and biological processes, this technical horizon of space and
time is neither achievable nor desirable. Such processes, on the
contrary, are existentially dependent on other spatial and temporal
orders. Yet because of the existence of this non-geographical and
atemporal horizon, the need -- as well as the possibility -- has arisen
to redefine the parameters of space and time themselves in order to
counteract the mire of technically defined spacelessness and
timelessness. If space and time are not simply to vanish in this
spaceless, ongoing present, how then should they be defined? Communal
formations create spaces for action not least by determining their own
geographies and temporal rhythms. They negotiate what is near and far
and also which places are disregarded (that is, not even perceived). If
every place is communicatively (and physically) reachable, every person
must decide which place he or she would like to reach in practice. This,
however, is not an individual decision but rather a task that can only
be approached collectively. Those places which are important and thus
near are determined by communal formations. This takes place in the form
of a rough consensus through the blogs that "one" has to read, the
exhibits that "one" has to see, the events and conferences that "one"
has to attend, the places that "one" has to visit before they are
overrun by tourists, the crises in which "the West" has to intervene,
the targets that "lend themselves" to a terrorist attack, and so on. On
its own, however, selection is not enough. Communal formations are
especially powerful when they generate the material and organizational
resources that are necessary for their members to implement their shared
worldview through actions -- to visit, for instance, the places that
have been chosen as important. This can happen if they enable access
[]{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
reductions, ride shares, places to stay, tips, links, insider knowledge,
public funds, airlifts, explosives, and so on. It is in this way that
each formation creates its respective spatial constructs, which define
distances in a great variety of ways. At the same time that war-torn
Syria is unreachably distant even for seasoned reporters and their
staff, veritable travel agencies are being set up in order to bring
Western jihadists there in large numbers.
Things are similar for the temporal dimensions of social and biological
processes. Permanent presence is a temporality that is inimical to life
but, under its influence, temporal rhythms have to be redefined as well.
What counts as fast? What counts as slow? In what order should things
proceed? On the everyday level, for instance, the matter can be as
simple as how quickly to respond to an email. Because the transmission
of information hardly takes any time, every delay is a purely social
creation. But how much is acceptable? There can be no uniform answer to
this. The members of each communal formation have to negotiate their own
rules with one another, even in areas of life that are otherwise highly
formalized. In an interview with the magazine *Zeit*, for instance, a
lawyer with expertise in labor law was asked whether a boss may require
employees to be reachable at all times. Instead of answering by
referring to any binding legal standards, the lawyer casually advised
that this was a matter of flexible negotiation: "Express your misgivings
openly and honestly about having to be reachable after hours and,
together with your boss, come up with an agreeable rule to
follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.
Temporalities that, in many areas, were once simply taken for granted by
everyone on account of the factuality of things now have to be
culturally determined -- that is, explicitly negotiated -- in a greater
number of contexts. Under the conditions of capitalism, which is always
creating new competitions and incentives, one consequence is the
often-lamented "acceleration of time." We are asked to produce, consume,
or accomplish more and more in less and less
time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
structuring of time is not limited to linear acceleration. It reaches
deep into the foundations of life and has even reconfigured biological
processes themselves. Today there is an entire industry that specializes
in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
newborns in liquid nitrogen -- that is, in suspending cellular
biological time -- in case they might be needed later on in life for a
transplant or for the creation of artificial organs. Children can be
born even if their physical mothers are already dead. Or they can be
"produced" from ova that have been stored for many years at minus 196
degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
questions now have to be addressed every day whose grand temporal
dimensions were once the matter of myth. In the case of atomic energy,
for instance, there is the issue of permanent disposal. Where can we
deposit nuclear waste for the next hundred thousand years without it
causing catastrophic damage? How can the radioactive material even be
transported there, wherever that is, within the framework of everyday
traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}
The construction of temporal dimensions and sequences has thus become an
everyday cultural question. Whereas throughout Europe, for example,
committees of experts and ethicists still meet to discuss reproductive
medicine and offer their various recommendations, many couples are
concerned with the specific question of whether or how they can fulfill
their wish to have children. Without a coherent set of rules, questions
such as these have to be answered by each individual with recourse to
his or her personally relevant communal formation. If there is no
cultural framework that at least claims to be binding for everyone, then
the individual must negotiate independently within each communal
formation with the goal of acquiring the resources necessary to act
according to communal values and objectives.
:::
These three functions -- selection, interpretation, and the constitutive
ability to act -- make communal formations the true subject of the
digital condition. In principle, these functions are nothing new;
rather, they are typical of fields that are organized without reference
to external or irrefutable authorities. The state of scholarship, for
instance, is determined by what is circulated in refereed publications.
In this case, "refereed" means that scientists at the same professional
rank mutually evaluate each other\'s work. The scientific community (or
better: the sub-community of a specialized discourse) []{#Page_93
type="pagebreak" title="93"}evaluates the contributions of individual
scholars. They decide what should be considered valuable, and this
consensus can theoretically be revised at any time. It is based on a
particular catalog of criteria, on an interpretive framework that
provides lines of inquiry, methods, appraisals, and conventions of
presentation. With every article, this framework is confirmed and
reconstituted. If the framework changes, this can lead in the most
extreme case to a paradigm shift, which overturns fundamental
orientations, assumptions, and
certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
not only a change in how scientific contributions are evaluated but also
a change in how the external world is perceived and what activities are
possible in it. Precisely because the sciences claim to define
themselves, they have the ability to revise their own foundations.
The sciences were the first large sphere of society to achieve
comprehensive cultural autonomy; that is, the ability to determine its
own binding meaning. Art was the second that began to organize itself on
the basis of internal feedback. It was during the era of Romanticism
that artists first laid claim to autonomy. They demanded "to absolve art
from all conditions, to represent it as a realm -- indeed as the only
realm -- in which truth and beauty are expressed in their pure form, a
realm in which everything truly human is
transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
photography in the second half of the nineteenth century, art also
liberated itself from its final task, which was foisted upon it from the
outside, namely the need to represent external reality. Instead of
having to represent the external world, artists could now focus on their
own subjectivity. This gave rise to a radical individualism, which found
its clearest summation in Marcel Duchamp\'s assertion that only the
artist could determine what is art. This he claimed in 1917 by way of
explaining how an industrially produced urinal, exhibited as a signed
piece with the title "Fountain," could be considered a work of art.
With the rise of the knowledge economy and the expansion of cultural
fields, including the field of art and the artists active within it,
this individualism quickly swelled to unmanageable levels. As a
consequence, the task of defining what should be regarded as art shifted
from the individual artist to the curator. It now fell upon the latter
to select a few works from the surplus of competing scenes and thus
bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
constantly diversifying and changing world of contemporary art. This
order was then given expression in the form of exhibits, which were
intended to be more than the sum of their parts. The beginning of this
practice can be traced to the 1969 exhibition *When Attitudes Become
Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
was also sponsored by Philip Morris). The works were not neatly
separated from one another and presented without reference to their
environment, but were connected with each other both spatially and in
terms of their content. The effect of the exhibition could be felt at
least as much through the collection of works as a whole as it could
through the individual pieces, many of which had been specially
commissioned for the exhibition itself. It not only cemented Szeemann\'s
reputation as one of the most significant curators of the twentieth
century; it also completely redefined the function of the curator as a
central figure within the art system.
This was more than 40 years ago and in a system that functioned
differently from that of today. The distance from this exhibition, but
also its ongoing relevance, was negotiated, significantly, in a
re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
the Kunsthalle Bern were reconstructed in the space of the Fondazione
Prada in such a way that both could be seen simultaneously. As is
typical with such re-enactments, the curators of the project described
its goals in terms of appropriation and distancing: "This was the
challenge: how could we find and communicate a limit to a non-limit,
creating a place that would reflect exactly the architectural structures
of the Kunsthalle, but also an asymmetrical space with respect to our
time and imbued with an energy and tension equivalent to that felt at
Bern?"[^62^](#c2-note-0062){#c2-note-0062a}
Curation -- that is, selecting works and associating them with one
another -- has become an omnipresent practice in the art system. No
exhibition takes place any more without a curator. Nevertheless,
curators have lost their extraordinary
position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
more of this work themselves, not only because the boundaries between
artistic and curatorial activities have become fluid but also because
many artists explicitly co-produce the context of their work by
incorporating a multitude of references into their pieces. It is with
precisely this in mind that André Rottmann, in the []{#Page_95
type="pagebreak" title="95"}quotation cited at the beginning of this
chapter, can assert that referentiality has become the dominant
production-aesthetic model in contemporary art. This practice enables
artists to objectify themselves by explicitly placing themselves into a
historical and social context. At the same time, it also enables them to
subjectify the historical and social context by taking the liberty to
select and arrange the references
themselves.[^64^](#c2-note-0064){#c2-note-0064a}
Such strategies are no longer specific to art. Self-generated spaces of
reference and agency are now deeply embedded in everyday life. The
reason for this is that a growing number of questions can no longer be
answered in a generally binding way (such as those about what
constitutes fine art), while the enormous expansion of the cultural
requires explicit decisions to be made in more aspects of life. The
reaction to this dilemma has been radical subjectivation. This has not,
however, been taking place at the level of the individual but rather at
that of communal formations. There is now a patchwork of answers to
large questions and a multitude of reactions to large challenges, all of
which are limited in terms of their reliability and scope.
:::
Even though participation in new formations is voluntary and serves the
interests of their members, it is not without preconditions. The most
important of these is acceptance, the willing adoption of the
interpretive framework that is generated by the communal formation. The
latter is formed from the social, cultural, legal, and technical
protocols that lend to each of these formations its concrete
constitution and specific character. Protocols are common sets of rules;
they establish, according to the network theorist Alexander Galloway,
"the essential points necessary to enact an agreed-upon standard of
action." They provide, he goes on, "etiquette for autonomous
agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
simultaneously voluntary and binding; they allow actors to meet
eye-to-eye instead of entering into hierarchical relations with one
another. If everyone voluntarily complies with the protocols, then it is
not necessary for one actor to give instructions to another. Whoever
accepts the relevant protocols can interact with others who do the same;
whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
will remain on the outside. Protocols establish, for example, common
languages, technical standards, or social conventions. The fundamental
protocol for the internet is the Transmission Control Protocol/Internet
Protocol (TCP/IP). This suite of protocols defines the common language
for exchanging data. Every device that exchanges information over the
internet -- be it a smartphone, a supercomputer in a data center, or a
networked thermostat -- has to use these protocols. In a growing number
of social contexts, the common language is English. Whoever wishes to
belong has to speak it increasingly often. In the natural sciences,
communication now takes place almost exclusively in English. Non-native
speakers who accept this norm may pay a high price: they have to learn a
new language and continually improve their command of it or else resign
themselves to being unable to articulate things as they would like --
not to mention losing the possibility of expressing something for which
another language would perhaps be more suitable, or forfeiting
traditions that cannot be expressed in English. But those who refuse to
go along with these norms pay an even higher price, risking
self-marginalization. Those who "voluntarily" accept conventions gain
access to a field of practice, even though within this field they may be
structurally disadvantaged. But unwillingness to accept such
conventions, with subsequent denial of access to this field, might have
even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
In everyday life, the factors involved with this trade-off are often
presented in the form of subtle cultural codes. For instance, in order
to participate in a project devoted to the development of free software,
it is not enough for someone to possess the necessary technical
knowledge; he or she must also be able to fit into a wide-ranging
informal culture with a characteristic style of expression, humor, and
preferences. Ultimately, software developers do not form a professional
corps in the traditional sense -- in which functionaries meet one
another in the narrow and regulated domain of their profession -- but
rather a communal formation in which the engagement of the whole person,
both one\'s professional and social self, is scrutinized. The
abolishment of the separation between different spheres of life,
requiring interaction of a more holistic nature, is in fact a key
attraction of []{#Page_97 type="pagebreak" title="97"}these communal
formations and is experienced by some as a genuine gain in freedom. In
this situation, one is no longer subjected to rules imposed from above
but rather one is allowed to -- and indeed ought to -- be authentically
pursuing his or her own interests.
But for others the experience can be quite the opposite because the
informality of the communal formation also allows forms of exclusion and
discrimination that are no longer acceptable in formally organized
realms of society. Discrimination is more difficult to identify when it
takes place within the framework of voluntary togetherness, for no one
is forced to participate. If you feel uncomfortable or unwelcome, you
are free to leave at any time. But this is a specious argument. The
areas of free software or Wikipedia are difficult places for women. In
these clubby atmospheres of informality, they are often faced with
blatant sexism, and this is one of the reasons why many women choose to
stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
2007, according to estimates by the American National Center for Women &
Information Technology, whereas approximately 27 percent of all jobs
related to computer science were held by women, their representation at
the same time was far lower in the field of free software -- on average
less than 2 percent. And for years, the proportion of women who edit
texts on Wikipedia has hovered at around 10
percent.[^68^](#c2-note-0068){#c2-note-0068a}
The consequences of such widespread, informal, and elusive
discrimination are not limited to the fact that certain values and
prejudices of the shared culture are included in these products, while
different viewpoints and areas of knowledge are
excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
are excluded or do not wish to expose themselves to discrimination (and
thus do not even bother to participate in any communal formations) do
not receive access to the resources that circulate there (attention and
support, valuable and timely knowledge, or job offers). Many people are
thus faced with the choice of either enduring the discrimination within
a community or remaining on the outside and thus invisible. That this
decision is made on a voluntary basis and on one\'s own responsibility
hardly mitigates the coercive nature of the situation. There may be a
choice, but it would be misleading to call it a free one.[]{#Page_98
type="pagebreak" title="98"}
:::
::: {.section}
### The power of sociability {#c2-sec-0016}
In order to explain the peculiar coercive nature of the (nominally)
voluntary acceptance of protocols, rules, and norms, the political
scientist David Singh Grewal, drawing on the work of Max Weber and
Michel Foucault, has distinguished between the "power of sovereignty"
and the "power of sociability."[^70^](#c2-note-0070){#c2-note-0070a}
The former develops on the basis of dominance and subordination, as
imposed by authorities, police officers, judges, or other figures within
formal hierarchies. Their power is anchored in disciplinary
institutions, and the dictum of this sort of power is: "You must!" The
power of sociability, on the contrary, functions by prescribing the
conditions or protocols under which people are able to enter into an
exchange with one another. The dictum of this sort of power is: "You
can!" The more people accept certain protocols and standards, the more
powerful these become. Accordingly, the sociability that they structure
also becomes more comprehensive, and those not yet involved have to ask
themselves all the more urgently whether they can afford not to accept
these protocols and standards. Whereas the first type of power is
ultimately based on the monopoly of violence and on repression, the
second is founded on voluntary submission. When the entire internet
speaks TCP/IP, then an individual\'s decision to use it may be voluntary
in nominal terms, but at the same time it is an indispensable
precondition for existing within the network at all. Protocols exert
power without there having to be anyone present to possess the power in
question. Whereas the sovereign can be located, the effects of
sociability\'s power are diffuse and omnipresent. They are not
repressive but rather constitutive. No one forces a scientist to publish
in English or a woman editor to tolerate disparaging remarks on
Wikipedia. People accept these often implicit behavioral norms (sexist
comments are permitted, for instance) out of their own interests in
order to acquire access to the resources circulating within the networks
and to constitute themselves within them. In this regard, Grewal
distinguishes between the \"intrinsic\" and \"extrinsic\" reasons for
abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
the first case, the motivation is based on a new protocol being better
suited than existing protocols for carrying out []{#Page_99
type="pagebreak" title="99"}a specific objective. People thus submit
themselves to certain rules because they are especially efficient,
transparent, or easy to use. In the second case, a protocol is accepted
not because but in spite of its features. It is simply a precondition
for gaining access to a space of agency in which resources and
opportunities are available that cannot be found anywhere else. In the
first case, it is possible to speak subjectively of voluntariness,
whereas the second involves some experience of impersonal compulsion.
One is forced to do something that might potentially entail grave
disadvantages in order to have access, at least, to another level of
opportunities or to create other advantages for oneself.
:::
::: {.section}
### Homogeneity, difference and authority {#c2-sec-0017}
Protocols are present on more than a technical level; as interpretive
frameworks, they structure viewpoints, rules, and patterns of behavior
on all levels. Thus, they provide a degree of cultural homogeneity, a
set of commonalities that lend these new formations their communal
nature. Viewed from the outside, these formations therefore seem
inclined toward consensus and uniformity, for their members have already
accepted and internalized certain aspects in common -- the protocols
that enable exchange itself -- whereas everyone on the outside has not
done so. When everyone is speaking in English, the conversation sounds
quite monotonous to someone who does not speak the language.
Viewed from the inside, the experience is something different: in order
to constitute oneself within a communal formation, not only does one
have to accept its rules voluntarily and in a self-motivated manner; one
also has to make contributions to the reproduction and development of
the field. Everyone is urged to contribute something; that is, to
produce, on the basis of commonalities, differences that simultaneously
affirm, modify, and enhance these commonalities. This leads to a
pronounced and occasionally highly competitive internal differentiation
that can only be understood, however, by someone who has accepted the
commonalities. To an outsider, this differentiation will seem
irrelevant. Whoever is not well versed in the universe of *Star Wars*
will not understand why the various character interpretations at
[]{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
discussed above, might be brilliant or even controversial. To such a
person, they will all seem equally boring and superficial.
These formations structure themselves internally through the production
of differences; that is, by constantly changing their common ground.
Those who are able to add many novel aspects to the common resources
gain a degree of authority. They assume central positions and they
influence, through their behavior, the development of the field more
than others do. However, their authority, influence, and de facto power
are not based on any means of coercion. As Niklas Luhmann noted, "In the
end, one participant\'s achievements in making selections \[...\] are
accepted by another participant \[...\] as a limitation of the latter\'s
potential experiences and activities without him having to make the
selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
a voluntary and self-interested act: the members of the formation
recognize that this person has contributed more to the common field and
to the resources within it. This, in turn, is to everyone\'s advantage,
for each member would ultimately like to make use of the field\'s
resources to achieve his or her own goals. This arrangement, which can
certainly take on hierarchical qualities, is experienced as something
meritocratically legitimized and voluntarily
accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
software, there has therefore been some discussion of "benevolent
dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
"dictators" is raised because projects are often led by charismatic
figures without a formal mandate. They are "benevolent" because their
position of authority is based on the fact that a critical mass of
participating producers has voluntarily subordinated itself for its own
self-interest. If the consensus breaks down over whose contributions
carry the most weight, then the formation will be at risk of
losing its internal structure and splitting apart ("forking," in the
jargon of free software).
:::
:::
Through personal communication, referential processes in communal
formations create cultural zones of various sizes and scopes. They
expand into the empty spaces that have been created by the erosion of
established institutions and []{#Page_101 type="pagebreak"
title="101"}processes, and once these new processes have been
established the process of erosion intensifies. Multiple processes of
exchange take place alongside one another, creating a patchwork of
interconnected, competing, or entirely unrelated spheres of meaning,
each with specific goals and resources and its own preconditions and
potentials. The structures of knowledge, order, and activity that are
generated by this are holistic as well as partial and limited. The
participants in such structures are simultaneously addressed on many
levels that were once functionally separated; previously independent
spheres, such as work and leisure, are now mixed together, but usually
only with respect to the subdivisions of one\'s own life. And, at first,
the structures established in this way are binding only for active
participants.
::: {.section}
### Exiting the "Library of Babel" {#c2-sec-0019}
For one person alone, however, these new processes would not be able to
generate more than a local island of meaning from the enormous clamor of
chaotic spheres of information. In his 1941 story "The Library of
Babel," Jorge Luis Borges fashioned a fitting image for such a
situation. He depicts the world as a library of unfathomable and
possibly infinite magnitude. The characters in the story do not know
whether there is a world outside of the library. There are reasons to
believe that there is, and reasons that suggest otherwise. The library
houses the complete collection of all possible books that can be written
on exactly 410 pages. Contained in these volumes is the promise that
there is "no personal or universal problem whose eloquent solution
\[does\] not exist," for every possible combination of letters, and thus
also every possible pronouncement, is recorded in one book or another.
No catalog has yet been found for the library (though it must exist
somewhere), and it is impossible to identify any order in its
arrangement of books. The "men of the library," according to Borges,
wander round in search of the one book that explains everything, but
their actual discoveries are far more modest. Only once in a while are
books found that contain more than haphazard combinations of signs. Even
small regularities within excerpts of texts are heralded as sensational
discoveries, and it is around these discoveries that competing
[]{#Page_102 type="pagebreak" title="102"}schools of interpretation
develop. Despite much labor and effort, however, the knowledge gained is
minimal and fragmentary, so the prevailing attitude in the library is
bleak. By the time of the narrator\'s generation, "nobody expects to
discover anything."[^75^](#c2-note-0075){#c2-note-0075a}
Although this vision has now been achieved from a quantitative
perspective -- no one can survey the "library" of digital information,
which in practical terms is infinitely large, and all of the growth
curves continue to climb steeply -- today\'s cultural reality is
nevertheless entirely different from that described by Borges. Our
ability to deal with massive amounts of data has radically improved, and
thus our faith in the utility of information is not only unbroken but
rather gaining strength. What is new is precisely such large quantities
of data ("big data"), which, as we are promised or forewarned, will lead
to new knowledge, to a comprehensive understanding of the world, indeed
even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
in data is based above all on the fact that the two processes described
above -- referentiality and communality -- are not the only new
mechanisms for filtering, sorting, aggregating, and evaluating things.
Beneath or ahead of the social mechanisms of decentralized and networked
cultural production, there are algorithmic processes that pre-sort the
immeasurably large volumes of data and convert them into a format that
can be apprehended by individuals, evaluated by communities, and
invested with meaning.
Strictly speaking, it is impossible to maintain a categorical
distinction between social processes that take place in and by means of
technological infrastructures and technical processes that are socially
constructed. In both cases, social actors attempt to realize their own
interests with the resources at their disposal. The methods of
(attempted) realization, the available resources, and the formulation of
interests mutually influence one another. The technological resources
are inscribed in the formulation of goals. These open up fields of
imagination and desire, which in turn inspire technical
development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
impossible to draw clear theoretical lines, the attempt to make such a
distinction can nevertheless be productive in practice, for in this way
it is possible to gain different perspectives about the same object of
investigation.[]{#Page_103 type="pagebreak" title="103"}
:::
::: {.section}
### The rise of algorithms {#c2-sec-0020}
An algorithm is a set of instructions for converting a given input into
a desired output by means of a finite number of steps: algorithms are
used to solve predefined problems. For a set of instructions to become
an algorithm, it has to be determined in three different respects.
First, the necessary steps -- individually and as a whole -- have to be
described unambiguously and completely. To do this, it is usually
necessary to use a formal language, such as mathematics, or a
programming language, in order to avoid the characteristic imprecision
and ambiguity of natural language and to ensure instructions can be
followed without interpretation. Second, it must be possible in practice
to execute the individual steps together. For this reason, every
algorithm is tied to the context of its realization. If the context
changes, so do the operating processes that can be formalized as
algorithms and thus also the ways in which algorithms can partake in the
constitution of the world. Third, it must be possible to execute an
operating instruction mechanically so that, under fixed conditions, it
always produces the same result.
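
A classic example that meets all three requirements is Euclid's algorithm
for computing the greatest common divisor of two numbers. The following
minimal sketch in Python (with arbitrarily chosen example numbers) is
offered purely as an illustration: each step is described unambiguously,
each step can be carried out in practice, and the same input always
produces the same result.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, unambiguous, repeatable procedure."""
    while b != 0:          # every step is fully determined by the current state
        a, b = b, a % b    # replace (a, b) with (b, a mod b)
    return a               # the same input always yields the same output

print(gcd(1071, 462))      # 21
```
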
Defined in such general terms, it would also be possible to understand
the instruction manual for a typical piece of Ikea furniture as an
algorithm. It is a set of instructions for creating, with a finite
number of steps, a specific and predefined piece of furniture (output)
from a box full of individual components (input). The instructions are
composed in a formal language, pictograms, which define each step as
unambiguously as possible, and they can be executed by a single person
with simple tools. The process can be repeated, for the final result is
always the same: a Billy box will always yield a Billy shelf. In this
case, a person takes over the role of a machine, which (unambiguous
pictograms aside) can lead to problems, be it that scratches and other
traces on the finished piece of furniture testify to the unique nature
of the (unsuccessful) execution, or that, inspired by the micro-trend of
"Ikea hacking," the official instructions are intentionally ignored.
Because such imprecision is supposed to be avoided, the most important
domain of algorithms in practice is mathematics and its implementation
on the computer. The term []{#Page_104 type="pagebreak"
title="104"}"algorithm" derives from the Persian mathematician,
astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
the Calculation with Hindu Numerals*, which was written in Baghdad in
825, was known widely in the Western Middle Ages through a Latin
translation and made the essential contribution of introducing
Indo-Arabic numerals and the number zero to Europe. The work begins
with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
for advanced methods of
calculation.[^78^](#c2-note-0078){#c2-note-0078a}
The modern effort to build machines that could mechanically carry out
instructions achieved its first breakthrough with Gottfried Wilhelm
Leibniz. He has often been credited with making the following remark:
"It is unworthy of excellent men to lose hours like slaves in the labour
of calculation which could be done by any peasant with the aid of a
machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
contains a distinction between higher cognitive and interpretive
activities, which are regarded as being truly human, and lower processes
that involve pure execution and can therefore be mechanized. To this
end, Leibniz himself developed the first calculating machine, which
could carry out all four of the basic types of arithmetic. He was not
motivated to do this by the practical necessities of production and
business (although conceptually groundbreaking, Leibniz\'s calculating
machine remained, on account of its mechanical complexity, a unique item
and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
estimation of the philosopher Sybille Krämer, calculating machines "were
rather speculative masterpieces of a century that, like none before it,
was infatuated by the idea of mechanizing 'intellectual'
processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
were implemented on a large scale to increase the efficiency of material
production, Leibniz had already speculated about using them to enhance
intellectual labor. And this vision has never since disappeared. Around
a century and a half later, the English polymath Charles Babbage
formulated it anew, now in direct connection with industrial
mechanization and its imperative of time-saving
efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
overcome the problem of practically realizing such a machine.
The decisive step that turned the vision of calculating machines into
reality was made by Alan Turing in 1937. With []{#Page_105
type="pagebreak" title="105"}a theoretical model, he demonstrated that
every algorithm could be executed by a machine as long as it could read
a set of discrete signs step by step, manipulate them according to established
rules, and then write them out again. The validity of his model did not
depend on whether the machine would be analog or digital, mechanical or
electronic, for the rules of manipulation were not at first conceived as
being a fixed component of the machine itself (that is, as being
implemented in its hardware). The electronic and digital approach came
to be preferred because it was hoped that even the instructions could be
read by the machine itself, so that the machine would be able to execute
not only one but (theoretically) every written algorithm. The
Hungarian-born mathematician John von Neumann made it his goal to
implement this idea. In 1945, he published a model in which the program
(the algorithm) and the data (the input and output) were housed in a
common storage device. Thus, both could be manipulated simultaneously
without having to change the hardware. In this way, he converted the
"Turing machine" into the "universal Turing machine"; that is, the
modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
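
What Turing's model demands of a machine -- reading discrete signs,
manipulating them according to fixed rules, and writing them back -- can be
made concrete with a toy example. The sketch below, written in Python for
illustration only (the rule table is invented for this purpose), implements
a machine that adds one to a binary number; the rule table stands in for
the stored program that von Neumann's architecture later placed in the
same memory as the data.

```python
# A toy Turing machine that adds one to a binary number.
# rules[(state, symbol)] = (symbol to write, head movement, next state)
rules = {
    ("inc", "1"): ("0", -1, "inc"),   # carry: turn 1 into 0 and move left
    ("inc", "0"): ("1",  0, "halt"),  # no carry: turn 0 into 1 and stop
    ("inc", "_"): ("1",  0, "halt"),  # ran off the left edge: write a new 1
}

def run(tape: str) -> str:
    cells = list(tape)
    head, state = len(cells) - 1, "inc"      # start at the rightmost digit
    while state != "halt":
        symbol = cells[head] if head >= 0 else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if head >= 0:
            cells[head] = new_symbol
        else:
            cells.insert(0, new_symbol)      # extend the tape to the left
            head = 0
        head += move
    return "".join(cells)

print(run("1011"))   # 1100
print(run("111"))    # 1000
```
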
Gordon Moore, the co-founder of the chip manufacturer Intel,
prognosticated 20 years later that the complexity of integrated circuits
and thus the processing power of computer chips would double every 18 to
24 months. Since the 1970s, his prediction has been known as Moore\'s
Law and has essentially been correct. This technical development has
indeed taken place exponentially, not least because the semi-conductor
industry has been oriented around
it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
computer, which was one of the first of its kind to be produced on a
large scale, could make approximately 40,000 calculations per second and
its cost, when it was introduced to the market in 1965, was \$1.5
million per unit. Just 40 years later, a standard server (with a
quad-core Intel processor) could make more than 40 billion calculations
per second, and this at a price of little more than \$1,500. This
amounts to an increase in performance by a factor of a million and a
corresponding price reduction by a factor of a thousand; that is, an
improvement in the price-to-performance ratio by a factor of a billion.
With inflation taken into consideration, this factor would be even
higher. No less dramatic were the increases in performance -- or rather
[]{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
area of data storage. In 1980, it cost more than \$400,000 to store a
gigabyte of data, whereas 30 years later it would cost just 10 cents to
do the same -- a price reduction by a factor of 4 million. And in both
areas, this development has continued without pause.
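
The orders of magnitude involved can be verified with a few lines of
arithmetic. The sketch below simply restates, in Python, the rounded
figures cited above:

```python
# Price-performance figures as cited above (approximate, rounded values).
ibm_360_40_1965 = {"ops_per_s": 40_000,          "price_usd": 1_500_000}
server_2005     = {"ops_per_s": 40_000_000_000,  "price_usd": 1_500}

performance_gain  = server_2005["ops_per_s"] / ibm_360_40_1965["ops_per_s"]  # 1,000,000
price_drop        = ibm_360_40_1965["price_usd"] / server_2005["price_usd"]  # 1,000
price_performance = performance_gain * price_drop                            # 1,000,000,000

storage_1980_cents, storage_2010_cents = 40_000_000, 10   # US cents per gigabyte
storage_drop = storage_1980_cents / storage_2010_cents     # 4,000,000

print(performance_gain, price_drop, price_performance, storage_drop)
# 1000000.0 1000.0 1000000000.0 4000000.0
```
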
These increases in performance have formed the material basis for the
rapidly growing number of activities carried out by means of algorithms.
We have now reached a point where Leibniz\'s distinction between
creative mental functions and "simple calculations" is becoming
increasingly fuzzy. Recent discussions about the allegedly threatening
"domination of the computer" have been kindled less by the increased use
of algorithms as such than by the gradual blurring of this distinction
through new possibilities for formalizing and mechanizing ever more areas of
creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
not long ago were reserved for human intelligence, such as composing
texts or analyzing the content of images, are now frequently done by
machines. As early as 2010, a program called Stats Monkey was introduced
to produce short reports about baseball games. All that the program
needs for this is comprehensive data about the games, which can be
accumulated mechanically and which have since become more detailed due
to improved image recognition and sensors. From these data, the program
extracts the decisive moments and players of a game, recognizes
characteristic patterns throughout the course of play (such as
"extending an early lead," "a dramatic comeback," etc.), and on this
basis generates its own report. Regarding the reports themselves, a
number of variables can be determined in advance, for instance whether
the story should be written from the perspective of a neutral observer
or from the standpoint of one of the two teams. If writing about little
league games, the program can be instructed to ignore the errors made by
children -- because no parent wants to read about those -- and simply
focus on their heroics. The algorithm was soon patented, and a start-up
business was created from the original interdisciplinary research
project: Narrative Science. In addition to sport reports it now offers
texts of all sorts, but above all financial reports -- another field for
which there is a great deal of available data. These texts have been
published by reputable media outlets such as the business magazine
*Forbes*, in which their authorship []{#Page_107 type="pagebreak"
title="107"}is credited to "Narrative Science." Although these
contributions are still limited to relatively simple topics, this will
not remain the case for long. When asked about the percentage of news
that would be written by computers 15 years from now, Narrative
Science\'s chief technology officer and co-founder Kristian Hammond
confidently predicted "\[m\]ore than 90 percent." He added that, within
the next five years, an algorithm could even win a Pulitzer
Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
self-promotion but, as a general estimation, Hammond\'s assertion is not
entirely beyond belief. It remains to be seen whether algorithms will
replace or simply supplement traditional journalism. Yet because media
companies are now under strong financial pressure, it is certainly
reasonable to predict that many journalistic texts will be automated in
the future. Entirely different applications, however, have also been
conceived. Alexander Pschera, for instance, foresees a new age in the
relationship between humans and nature, for, as soon as animals are
equipped with transmitters and sensors and are thus able to tell their
own stories through the appropriate software, they will be regarded as
individuals and not merely as generic members of a
species.[^87^](#c2-note-0087){#c2-note-0087a}
We have not yet reached this point. However, given that the CIA has also
expressed interest in Narrative Science and has invested in it through
its venture-capital firm In-Q-Tel, there are indications that
applications are being developed beyond the field of journalism. For the
purpose of spreading propaganda, for instance, algorithms can easily be
used to create a flood of entries on online forums and social mass
media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
one of many companies offering automated text analysis and production.
As implemented by IBM and other firms, so-called E-discovery software
promises to reduce dramatically the amount of time and effort required
to analyze the constantly growing numbers of files that are relevant to
complex legal cases. Without such software, it would be impossible in
practice for lawyers to deal with so many documents. Numerous bots
(automated editing programs) are active in the production of Wikipedia
as well. Whereas, in the German edition, bots are forbidden from writing
their own articles, this is not the case in the Swedish version.
Measured by the number of entries, the latter is now the second-largest
edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
title="108"}world, for, in the summer of 2013, a single bot contributed
more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
Since 2013, moreover, the company Epagogix has offered software that
uses historical data to evaluate the market potential of film scripts.
At least one major Hollywood studio uses this software behind the backs
of scriptwriters and directors, for, according to the company\'s CEO,
the latter would be "nervous" to learn that their creative work was
being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
Think, too, of the typical statement that is made at the beginning of a
call to a telephone hotline -- "This call may be recorded for training
purposes." Increasingly, this training is not intended for the employees
of the call center but rather for algorithms. The latter are expected to
learn how to recognize the personality type of the caller and, on that
basis, to produce an appropriate script to be read by their poorly
educated and part-time human
co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
use of algorithms to grade student
essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
to expand this list any further. Even without additional references to
comparable developments in the fields of image, sound, language, and
film analysis, it is clear by now that, on many fronts, the borders
between the creative and the mechanical have
shifted.[^93^](#c2-note-0093){#c2-note-0093a}
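
None of the companies mentioned above discloses its internal methods, but
the general technique of data-to-text generation can be outlined in a
deliberately crude sketch: extract salient facts from structured data,
match them against predefined narrative patterns, and fill a textual
template. The example below (team names, data fields, and wording are all
invented) is purely illustrative and does not reproduce the method of
Narrative Science or any other system named here.

```python
# A deliberately crude sketch of data-to-text generation: structured game
# data goes in, a short report comes out. Purely illustrative.
game = {
    "home": "Cubs", "away": "Mets",
    "score": (6, 5),                       # (home runs, away runs)
    "lead_changes": 3,
    "top_player": {"name": "Rivera", "hits": 4},
}

def pattern(g: dict) -> str:
    """Pick a narrative pattern from simple features of the data."""
    if g["lead_changes"] >= 3:
        return "a back-and-forth game"
    margin = abs(g["score"][0] - g["score"][1])
    return "a comfortable win" if margin >= 4 else "a close game"

def report(g: dict) -> str:
    """Fill a textual template with the extracted facts."""
    home_runs, away_runs = g["score"]
    winner, loser = (g["home"], g["away"]) if home_runs > away_runs else (g["away"], g["home"])
    return (f"The {winner} beat the {loser} {max(g['score'])}-{min(g['score'])} "
            f"in {pattern(g)}. {g['top_player']['name']} led the way "
            f"with {g['top_player']['hits']} hits.")

print(report(game))
# The Cubs beat the Mets 6-5 in a back-and-forth game. Rivera led the way with 4 hits.
```
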
:::
The algorithms used for such tasks, however, are no longer simple
sequences of static instructions. They are no longer repeated unchanged,
over and over again, but are dynamic and adaptive to a high degree. The
computing power available today is used to write programs that modify
and improve themselves semi-automatically and in response to feedback.
What this means can be illustrated by the example of evolutionary and
self-learning algorithms. An evolutionary algorithm is developed in an
iterative process that continues to run until the desired result has
been achieved. In most cases, the values of the variables of the first
generation of algorithms are chosen at random in order to diminish the
influence of the programmer\'s presuppositions on the results. These
cannot be avoided entirely, however, because the type of variables
(independent of their value) has to be determined in the first place. I
will return to this problem later on. This is []{#Page_109
type="pagebreak" title="109"}followed by a phase of evaluation: the
output of every tested algorithm is evaluated according to how close it
is to the desired solution. The best are then chosen and combined with
one another. In addition, mutations (that is, random changes) are
introduced. These steps are then repeated as often as necessary until,
according to the specifications in question, the algorithm is
"sufficient" or cannot be improved any further. By means of intensive
computational processes, algorithms are thus "cultivated"; that is,
large numbers of these are tested instead of a single one being designed
analytically and then implemented. At the heart of this pursuit is a
functional solution that proves itself experimentally and in practice,
but about which it might no longer be possible to know why it functions
or whether it actually is the best possible solution. The fundamental
methods behind this process largely derive from the 1970s (the first
stage of artificial intelligence), the difference being that today they
can be carried out far more effectively. One of the best-known examples
of an evolutionary algorithm is that of Google Flu Trends. In order to
predict which regions will be especially struck by the flu in a given
year, it evaluates the geographic distribution of internet searches for
particular terms ("cold remedies," for instance). To develop the
program, Google tested 450 million different models until one emerged
that could reliably identify local flu epidemics one to two weeks ahead
of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
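
The iterative loop described above -- random initialization, evaluation,
selection, recombination, and mutation -- can be condensed into a short
sketch. The toy example below (the target values and all parameters are
invented for illustration) evolves a small set of numeric parameters toward
a known target; actual applications such as the model search behind Google
Flu Trends operate on an incomparably larger scale, but the structure of
the loop is the same.

```python
import random

TARGET = [3.2, -1.5, 0.7]            # the "desired solution" of this toy problem

def fitness(candidate):
    """Evaluation: higher is better (negative distance to the target)."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def crossover(a, b):
    """Combination: build a child gene by gene from two parents."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(candidate, rate=0.1):
    """Mutation: introduce small random changes."""
    return [c + random.gauss(0, 0.3) if random.random() < rate else c
            for c in candidate]

# First generation: values chosen at random.
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                # selection: keep the best candidates
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print(max(population, key=fitness))          # close to TARGET after 200 generations
```
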
In pursuits of this magnitude, the necessary processes can only be
administered by computer programs. The series of tests are no longer
conducted by programmers but rather by algorithms. In short, algorithms
are implemented in order to write new algorithms or determine their
variables. If this reflexive process, in turn, is built into an
algorithm, then the latter becomes "self-learning": the programmers do
not set the rules for its execution but rather the rules according to
which the algorithm is supposed to know how to accomplish a particular
goal. In many cases, the solution strategies are so complex that they
are incomprehensible in retrospect. They can no longer be tested
logically, only experimentally. Such algorithms are essentially black
boxes -- objects that can only be understood by their outer behavior but
whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
title="110"}
Automatic facial recognition, as used in surveillance technologies and
for authorizing access to certain things, is based on the fact that
computers can evaluate large numbers of facial images, first to produce
a general model for a face, then to identify the variables that make a
face unique and therefore recognizable. With so-called "unsupervised" or
"deep-learning" algorithms, some developers and companies have even
taken this a step further: computers are expected to extract faces from
unstructured images -- that is, from volumes of images that contain
images both with faces and without them -- and to do so without
possessing in advance any model of the face in question. So far, the
extraction and evaluation of unknown patterns from unstructured material
has only been achieved in the case of very simple patterns -- with edges
or surfaces in images, for instance -- for it is extremely complex and
computationally intensive to program such learning processes. In recent
years, however, there have been enormous leaps in available computing
power, and both the data inputs and the complexity of the learning
models have increased exponentially. Today, on the basis of simple
patterns, algorithms are developing improved recognition of the complex
content of images. They are refining themselves on their own. The term
"deep learning" is meant to denote this very complexity. In 2012, Google
was able to demonstrate the performance capacity of its new programs in
an impressive manner: from a collection of randomly chosen YouTube
videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
it was possible to create a model in just three days that increased
facial recognition in unstructured images by 70
percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
does not "know" what a face is, but it reliably recognizes a class of
forms that humans refer to as a face. One advantage of a model that is
not created on the basis of prescribed parameters is that it can also
identify faces in non-standard situations (for instance if a person is
in the background, if a face is half-concealed, or if it has been
recorded at a sharp angle). Thanks to this technique, it is possible to
search the content of images directly and not, as before, primarily by
searching their descriptions. Such algorithms are also being used to
identify people in images and to connect them in social networks with
the profiles of the people in question, and this []{#Page_111
type="pagebreak" title="111"}without any cooperation from the users
themselves. Such algorithms are also expected to assist in directly
controlling activity in "unstructured" reality, for instance in
self-driving cars or other autonomous mobile applications that are of
great interest to the military in particular.
Algorithms of this sort can react and adjust themselves directly to
changes in the environment. This feedback, however, also shortens the
timeframe within which they are able to generate repetitive and
therefore predictable results. Thus, algorithms and their predictive
powers can themselves become unpredictable. Stock markets have
frequently experienced so-called "sub-second extreme events"; that is,
price fluctuations that happen in less than a
second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
however, such as that which occurred on May 6, 2010, when the Dow Jones
Index dropped almost a thousand points in a few minutes (and was thus
perceptible to humans), have not been terribly
uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
voice commands on mobile phones (Apple\'s Siri, for example, which came
out in 2011), programs based on self-learning algorithms have now
reached the public at large and have infiltrated ever more areas of
everyday life.
:::
Orders generated by algorithms are a constitutive element of the digital
condition. On the one hand, the mechanical pre-sorting of the
(informational) world is a precondition for managing immense and
unstructured amounts of data. On the other hand, these large amounts of
data and the computing centers in which they are stored and processed
provide the material precondition for developing increasingly complex
algorithms. Necessities and possibilities are mutually motivating one
another.[^98^](#c2-note-0098){#c2-note-0098a}
Perhaps the best-known algorithms that sort the digital infosphere and
make it usable in its present form are those of search engines, above
all Google\'s PageRank. Thanks to these, we can find our way around in a
world of unstructured information and transfer increasingly larger parts
of the (informational) world into the order of unstructuredness without
giving rise to the "Library of Babel." Here, "unstructured" means that
there is no prescribed order such as (to stick []{#Page_112
type="pagebreak" title="112"}with the image of the library) a cataloging
system that assigns to each book a specific place on a shelf. Rather,
the books are spread all over the place and are dynamically arranged,
each according to a search, so that the appropriate books for each
visitor are always standing ready at the entrance. Yet the metaphor of
books being strewn all about is problematic, for "unstructuredness" does
not simply mean the absence of any structure but rather the presence of
another type of order -- a meta-structure, a potential for order -- out
of which innumerable specific arrangements can be generated on an ad hoc
basis. This meta-structure is created by algorithms. They subsequently
derive from it an actual order, which the user encounters, for instance,
when he or she scrolls through a list of hits produced by a search
engine. What the user does not see are the complex preconditions for
assembling the search results. By the middle of 2014, according to the
company\'s own information, the Google index alone included more than a
hundred million gigabytes of data.
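The difference between an absent order and such a potential for order can be illustrated with a very small example. The "documents" below are invented; the sketch shows only how a meta-structure (here, an inverted index) allows a specific arrangement to be generated ad hoc for each query.

```python
# A tiny invented "library" with no fixed shelf order.
documents = {
    1: "the raw and the cooked",
    2: "raw data is an oxymoron",
    3: "the library of babel",
}

# The meta-structure: an inverted index mapping every word to the
# documents containing it. No single arrangement is privileged.
index = {}
for doc_id, text in documents.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(doc_id)

def arrange(query):
    # An actual order is generated ad hoc for each search: documents
    # are ranked by how many of the query's words they contain.
    words = query.split()
    hits = {doc_id: sum(doc_id in index.get(w, set()) for w in words)
            for doc_id in documents}
    return [documents[d] for d, score in
            sorted(hits.items(), key=lambda kv: -kv[1]) if score > 0]

print(arrange("raw data"))
```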
Originally (that is, in the second half of the 1990s), PageRank
functioned in such a way that the algorithm analyzed the structure of
links on the World Wide Web, first by noting the number of links that
referred to a given document, and second by evaluating the "relevance"
of the site that linked to the document in question. The relevance of a
site, in turn, was determined by the number of links that led to it.
From these two variables, every document registered by the search engine
was assigned a value, the PageRank. The latter served to present the
documents found with a given search term as a hierarchical list (search
results), whereby the document with the highest value was listed
first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
successful because it reduced the unfathomable chaos of the World Wide
Web to a task that could be managed without difficulty by an individual
user: inputting a search term and selecting from one of the presented
"hits." The simplicity of the user\'s final choice, together with the
quality of the algorithmic pre-selection, quickly pushed Google past its
competition.
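Reduced to its published outline, this mechanism fits into a few lines. The following Python sketch uses an invented four-page link graph and loosely follows the simplified formulation in Brin and Page's paper (see note 99); the production system was, of course, far more elaborate.

```python
# A tiny invented web: each page lists the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["a", "c"],
}

DAMPING = 0.85                       # weight given to actual links vs. random jumps
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                  # iterate until the values settle
    new_rank = {}
    for p in pages:
        # A page's rank is fed by the ranks of the pages linking to it, each
        # divided among that page's outgoing links: a link from a highly
        # ranked page therefore counts for more than one from an obscure page.
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - DAMPING) / len(pages) + DAMPING * incoming
    rank = new_rank

for p, value in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(p, round(value, 3))
```

Even in this toy version, the page that receives the most and best-placed incoming links ends up at the top of the list, while the page to which nothing links at all sinks to the bottom.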
Underlying this process is the assumption that every link is an
indication of relevance, and that links from frequently linked (that is,
popular) sources are more important than those from less frequently
linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
title="113"}The advantage of this assumption is that it can be
understood in terms of purely quantitative variables and it is not
necessary to have any direct understanding of a document\'s content or
of the context in which it exists.
In the middle of the 1990s, when the first version of the PageRank
algorithm was developed, the problem of judging the relevance of
documents whose content could only partially be evaluated was not a new
one. Science administrators at universities and funding agencies had
been facing this difficulty since the 1950s. During the rise of the
knowledge economy, the number of scientific publications increased
rapidly. Scientific fields, perspectives, and methods also multiplied
and diversified during this time, so that even experts could not survey
all of the work being done in their own areas of
research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
and evaluating the content of countless new publications, they shifted
their analysis to a higher level of abstraction. They began to count how
often an article or book was cited and applied this information to
assess the value of a given author or
publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
assumption was (and remains) that only important things are referenced,
and therefore every citation and every reference can be regarded as an
indirect vote for something\'s relevance.
In both cases -- classifying a chaotic sphere of information and
administering an expanding industry of knowledge -- the challenge is to
develop dynamic orders for rapidly changing fields, enabling the
evaluation of the importance of individual documents without knowledge
of their content. Because the analysis of citations or links operates on
a purely quantitative basis, large amounts of data can be quickly
structured with them, and especially relevant positions can be
determined. The second advantage of this approach is that it does not
require any assumptions about the contours of different fields or their
relationships to one another. This enables the organization of
disordered or dynamic content. In both cases, references made by the
actors themselves are used: citations in a scientific text, links on
websites. Their value for establishing the order of a field as a whole,
however, is only visible in the aggregate, for instance in the frequency
with which a given article is
cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
from analyzing "data" (the content of documents in the traditional
sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
"meta-data" (describing documents in light of their relationships to one
another) is a precondition for being able to make any use at all of
growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
This shift introduced a new level of abstraction. Information is no
longer understood as a representation of external reality; its
significance is not evaluated with regard to the relation between
"information" and "the world," for instance with a qualitative criterion
such as "true"/"false." Rather, the sphere of information is treated as
a self-referential, closed world, and documents are accordingly only
evaluated in terms of their position within this world, though with
quantitative criteria such as "central"/"peripheral."
Even though the PageRank algorithm was highly effective and assisted
Google\'s rapid ascent to a market-leading position, at the beginning it
was still relatively simple and its mode of operation was at least
partially transparent. It followed the classical statistical model of an
algorithm. A document or site referred to by many links was considered
more important than one to which fewer links
referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
the given structural order of information and determined the position of
every document therein, and this was largely done independently of the
context of the search and without making any assumptions about it. This
approach functioned relatively well as long as the volume of information
did not exceed a certain size, and as long as the users and their
searches were somewhat similar to one another. In both respects, this is
no longer the case. The amount of information to be pre-sorted is
increasing, and users are searching in all possible situations and
places for everything under the sun. At the time Google was founded, no
one would have thought to check the internet, quickly and while on
one\'s way, for today\'s menu at the restaurant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
:::
In order to react to such changes in user behavior -- and simultaneously
to advance it further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which influences the value of a site within
PageRank and thus the order of search results. The algorithm is no
longer a fixed object or unchanging recipe but is transforming into a
dynamic process, an opaque cloud composed of multiple interacting
algorithms that are continuously refined (between 500 and 600 times a
year, according to some estimates). These ongoing developments are so
extensive that, since 2003, several new versions of the algorithm cloud
have appeared each year with their own names. In 2014 alone, Google
carried out 13 large updates, more than ever
before.[^105^](#c2-note-0105){#c2-note-0105a}
These changes continue to bring about new levels of abstraction, so that
the algorithm takes into account additional variables such as the time
and place of a search, alongside a person\'s previously recorded
behavior -- but also his or her involvement in social environments, and
much more. Personalization and contextualization were made part of
Google\'s search algorithm in 2005. At first it was possible to choose
whether or not to use these. Since 2009, however, they have been a fixed
and binding component for everyone who conducts a search through
Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
search algorithm had grown to include at least 200
variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
that the algorithm no longer determines the position of a document
within a dynamic informational world that exists for everyone
externally. Instead, it now assigns each document a rank within a
dynamic and singular universe of information that is tailored to every
individual user. For every person, an entirely different order is
created instead of just an excerpt from a previously existing order. The
world is no longer being represented; it is generated uniquely for every
user and then presented. Google is not the only company that has gone
down this path. Orders produced by algorithms have become increasingly
oriented toward creating, for each user, his or her own singular world.
Facebook, dating services, and other social mass media have been
pursuing this approach even more radically than Google.
:::
::: {.section}
### From the data shadow to the synthetic profile {#c2-sec-0024}
This form of generating the world requires not only detailed information
about the external world (that is, the reality []{#Page_116
type="pagebreak" title="116"}shared by everyone) but also information
about every individual\'s own relation to the
latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
established for every user, and the more extensive they are, the better
they are for the algorithms. A profile created by Google, for instance,
identifies the user on three levels: as a "knowledgeable person" who is
informed about the world (this is established, for example, by recording
a person\'s searches, browsing behavior, etc.), as a "physical person"
who is located and mobile in the world (a component established, for
example, by tracking someone\'s location through a smartphone, sensors
in a smart home, or body signals), and as a "social person" who
interacts with other people (a facet that can be determined, for
instance, by following someone\'s activity on social mass
media).[^109^](#c2-note-0109){#c2-note-0109a}
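Schematically, such a profile can be pictured as three accumulating layers of recorded traces. The Python sketch below is purely illustrative (the field names are hypothetical, not Google's); it serves only to make concrete the claim that the "knowledgeable," "physical," and "social" person are each reduced to streams of logged events.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UserProfile:
    # "Knowledgeable person": what the user searches for and reads.
    searches: List[str] = field(default_factory=list)
    pages_visited: List[str] = field(default_factory=list)
    # "Physical person": where the user is and how he or she moves.
    locations: List[Tuple[float, float]] = field(default_factory=list)   # (lat, lon)
    # "Social person": with whom the user interacts.
    interactions: List[str] = field(default_factory=list)                # contact ids

profile = UserProfile()
profile.searches.append("flu symptoms")
profile.locations.append((47.37, 8.54))
profile.interactions.append("contact_42")
```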
Unlike the situation in the 1990s, however, these profiles are no longer
simply representations of singular people -- they are not "digital
personas" or "data shadows." They no longer represent what is
conventionally referred to as "individuality," in the sense of a
spatially and temporally uniform identity. On the one hand, profiles
rather consist of sub-individual elements -- of fragments of recorded
behavior that can be evaluated on the basis of a particular search
without promising to represent a person as a whole -- and they consist,
on the other hand, of clusters of multiple people, so that the person
being modeled can simultaneously occupy different positions in time.
This temporal differentiation enables predictions of the following sort
to be made: a person who has already done *x* will, with a probability
of *y*, go on to engage in activity *z*. It is in this way that Amazon
assembles its book recommendations, for the company knows that, within
the cluster of people that constitutes part of every person\'s profile,
a certain percentage of them have already gone through this sequence of
activity. Or, as the data-mining company Science Rockstars (!) once
pointedly expressed on its website, "Your next activity is a function of
the behavior of others and your own past."
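As a rough illustration of this kind of prediction, the following sketch estimates the probability in question from nothing but the recorded activity sequences of other users. The histories and item names are invented; the point is that no understanding of the items themselves is required.

```python
# Hypothetical recorded activity sequences of other users.
histories = [
    ["book_a", "book_b", "book_c"],
    ["book_a", "book_b"],
    ["book_a", "book_d"],
    ["book_b", "book_c"],
]

def probability_of_next(done, candidate):
    # P(candidate | done): among the users who did 'done',
    # what share also went on to do 'candidate'?
    relevant = [h for h in histories if done in h]
    if not relevant:
        return 0.0
    return sum(candidate in h for h in relevant) / len(relevant)

# "Customers who chose book_a ...": rank possible follow-ups by that share.
candidates = {item for h in histories if "book_a" in h for item in h} - {"book_a"}
for item in sorted(candidates, key=lambda i: -probability_of_next("book_a", i)):
    print(item, round(probability_of_next("book_a", item), 2))
```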
Google and other providers of algorithmically generated orders have been
devoting increased resources to the prognostic capabilities of their
programs in order to make the confusing and potentially time-consuming
step of the search obsolete. The goal is to minimize a rift that comes
to light []{#Page_117 type="pagebreak" title="117"}in the act of
searching, namely that between the world as everyone experiences it --
plagued by uncertainty, for searching implies "not knowing something" --
and the world of algorithmically generated order, in which certainty
prevails, for everything has been well arranged in advance. Ideally,
questions should be answered before they are asked. The first attempt by
Google to eliminate this rift is called Google Now, and its slogan is
"The right information at just the right time." The program, which was
originally developed as an app but has since been made available on
Chrome, Google\'s own web browser, attempts to anticipate, on the basis
of existing data, a user\'s next step, and to provide the necessary
information before it is searched for in order that such steps take
place efficiently. Thus, for instance, it draws upon information from a
user\'s calendar in order to figure out where he or she will have to go
next. On the basis of real-time traffic data, it will then suggest the
optimal way to get there. For those driving cars, the amount of traffic
on the road will be part of the equation. This is ascertained by
analyzing the motion profiles of other drivers, which will allow the
program to determine whether the traffic is flowing or stuck in a jam.
If enough historical data is taken into account, the hope is that it
will be possible to redirect cars in such a way that traffic jams should
no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
public transport, Google Now evaluates real-time data about the
locations of various transport services. With this information, it will
suggest the optimal route and, depending on the calculated travel time,
it will send a reminder (sometimes earlier, sometimes later) when it is
time to go. What Google is still experimenting with and testing in
a limited and unambiguous context is already part of Facebook\'s
everyday operations. With its EdgeRank algorithm, Facebook already
organizes everyone\'s newsfeed, entirely in the background and without
any explicit user interaction. On the basis of three variables -- user
affinity (previous interactions between two users), content weight (the
rate of interaction between all users and a specific piece of content),
and currency (the age of a post) -- the algorithm selects content from
the status updates made by one\'s friends to be displayed on one\'s own
page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
ensures that the stream of updates remains easy to scroll through, while
also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
-- leaving enough room for advertising. This potential for manipulation,
which algorithms possess as they work away in the background, will be
the topic of my next section.
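The weighting of these three variables can be sketched as follows. The formula and the numbers are illustrative assumptions rather than Facebook's actual implementation, which has never been fully disclosed; the sketch only shows how a feed can be assembled in the background from affinity, content weight, and the age of a post.

```python
import math

def edge_score(affinity, content_weight, age_hours, decay=0.1):
    # One post's score: closeness to the post's author, times how much
    # interaction the post attracts overall, times a decay favoring recency.
    return affinity * content_weight * math.exp(-decay * age_hours)

# Hypothetical candidate posts from one's contacts.
posts = [
    {"author": "close_friend", "affinity": 0.9, "content_weight": 1.2, "age_hours": 2},
    {"author": "acquaintance", "affinity": 0.2, "content_weight": 3.0, "age_hours": 1},
    {"author": "close_friend", "affinity": 0.9, "content_weight": 0.5, "age_hours": 30},
]

# The "newsfeed" is simply the candidates sorted by that score.
feed = sorted(
    posts,
    key=lambda p: edge_score(p["affinity"], p["content_weight"], p["age_hours"]),
    reverse=True,
)
for post in feed:
    score = edge_score(post["affinity"], post["content_weight"], post["age_hours"])
    print(post["author"], round(score, 3))
```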
:::
::: {.section}
### Variables and correlations {#c2-sec-0025}
Every complex algorithm contains a multitude of variables and usually an
even greater number of ways to make connections between them. Every
variable and every relation, even if they are expressed in technical or
mathematical terms, codifies assumptions that express a specific
position in the world. There can be no purely descriptive variables,
just as there can be no such thing as "raw
data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
-- are always already "cooked"; that is, they are engendered through
cultural operations and formed within cultural
categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
produced data and with every execution of an algorithm, the assumptions
embedded in them are activated, and the positions contained within them
have effects on the world that the algorithm generates and presents.
As already mentioned, the early version of the PageRank algorithm was
essentially based on the rather simple assumption that frequently linked
content is more relevant than content that is only seldom linked to, and
that links to sites that are themselves frequently linked to should be
given more weight than those found on sites with fewer links to them.
Replacing the qualitative criterion of "relevance" with the quantitative
criterion of "popularity" not only proved to be tremendously practical
but also extremely consequential, for search engines not only describe
the world; they create it as well. That which search engines put at the
top of this list is not just already popular but will remain so. A third
of all users click on the first search result, and around 95 percent do
not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
the earliest version of the PageRank algorithm did not represent
existing reality but rather (co-)constituted it.
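This self-reinforcing effect can be made visible with a small simulation: if users overwhelmingly click on whatever is listed first, and clicks in turn feed back into the ranking, an initially negligible advantage becomes permanent. The click-through shares used below are loosely based on the figures just cited; everything else is an invented assumption.

```python
import random

# Two documents start out almost equal; the ranking is by accumulated clicks.
clicks = {"doc_a": 51, "doc_b": 50}

# Click-through shares loosely based on the figures cited above:
# roughly a third of users take the first result, few look further down.
P_FIRST, P_SECOND = 0.33, 0.05

for _ in range(10_000):
    ranking = sorted(clicks, key=clicks.get, reverse=True)
    r = random.random()
    if r < P_FIRST:
        clicks[ranking[0]] += 1          # most users click the top result
    elif r < P_FIRST + P_SECOND:
        clicks[ranking[1]] += 1          # a few click the second result

# The initially negligible lead of doc_a has become overwhelming.
print(clicks)
```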
Popularity, however, is not the only element with which algorithms
actively give shape to the user\'s world. A search engine can only sort,
weigh, and make available that portion of information which has already
been incorporated into its index. Everything else remains invisible. The
relation between []{#Page_119 type="pagebreak" title="119"}the recorded
part of the internet (the "surface web") and the unrecorded part (the
"deep web") is difficult to determine. Estimates have varied between
ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
many reasons why content might be inaccessible to search engines.
Perhaps the information has been saved in formats that search engines
cannot read or can read only poorly, or perhaps it has been hidden
behind proprietary barriers such as paywalls. In order to expand the
realm of things that can be exploited by their algorithms, the operators
of search engines offer extensive guidance about how providers should
design their sites so that search tools can find them in an optimal
manner. It is not necessary to follow this guidance, but given the
central role of search engines in sorting and filtering information, it
is clear that they exercise a great deal of power by setting the
standards.[^116^](#c2-note-0116){#c2-note-0116a}
That the individual must "voluntarily" submit to this authority is
typical of the power of networks, which do not give instructions but
rather constitute preconditions. Yet it is in the interest of (almost)
every producer of information to optimize its position in a search
engine\'s index, and thus there is a strong incentive to accept the
preconditions in question. Considering, moreover, the nearly
monopolistic character of many providers of algorithmically generated
orders and the high price that one would have to pay if one\'s own site
were barely (or not at all) visible to others, the term "voluntary"
begins to take on a rather foul taste. This is a more or less subtle way
of pre-formatting the world so that it can be optimally recorded by
algorithms.[^117^](#c2-note-0117){#c2-note-0117a}
The providers of search engines usually justify such methods in the name
of offering "more efficient" services and "more relevant" results.
Ostensibly technical and neutral terms such as "efficiency" and
"relevance" do little, however, to conceal the political nature of
defining variables. Efficient with respect to what? Relevant for whom?
These are issues that are decided without much discussion by the
developers and institutions that regard the algorithms as their own
property. Every now and again such questions incite public debates,
mostly when the interests of one provider happen to collide with those
of its competition. Thus, for instance, the initiative known as
FairSearch has argued that Google abuses its market power as a search
engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
content and thus to showcase it prominently in search
results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
representatives alleged, for example, that Google favors its own map
service in the case of address searches and its own price comparison
service in the case of product searches. The argument had an effect. In
November of 2010, the European Commission initiated an antitrust
investigation against Google. In 2014, a settlement was proposed that
would have required the American internet giant to make certain
concessions, but the members of the Commission, the EU Parliament, and
consumer protection agencies were not satisfied with the agreement. In
April 2015, the antitrust proceedings were recommenced by a newly
appointed Commission, its reasoning being that "Google does not apply to
its own comparison shopping service the system of penalties which it
applies to other comparison shopping services on the basis of defined
parameters, and which can lead to the lowering of the rank in which they
appear in Google\'s general search results
pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
Commission accused the company of manipulating search results to its own
advantage and the disadvantage of users.
This is not the only instance in which the political side of search
algorithms has come under public scrutiny. In the summer of 2012, Google
announced that sites with higher numbers of copyright removal notices
would henceforth appear lower in its
rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
introduced explicitly political and economic criteria in order to
influence what, according to the standards of certain powerful players
(such as film studios), users were able to
view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
be possible to speak of the personalization of searching, except that
the heart of the situation was not the natural person of the user but
rather the juridical person of the copyright holder. It was according to
the latter\'s interests and preferences that searching was being
reoriented. Amazon has employed similar tactics. In 2014, the online
merchant changed its celebrated recommendation algorithm with the goal
of reducing the presence of books released by irritating publishers that
dared to enter into price negotiations with the
company.[^122^](#c2-note-0122){#c2-note-0122a}
Controversies over the methods of Amazon or Google, however, are the
exception rather than the rule. Necessary (but never neutral) decisions
about recording and evaluating data []{#Page_121 type="pagebreak"
title="121"}with algorithms are being made almost all the time without
any discussion whatsoever. The logic of the original PageRank algorithm
was criticized as early as the year 2000 for essentially representing
the commercial logic of mass media, systematically disadvantaging
less-popular though perhaps otherwise relevant information, and thus
undermining the "substantive vision of the web as an inclusive
democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
the search algorithm that have been adopted since then may have modified
this tendency, but they have certainly not weakened it. In addition to
concentrating on what is popular, the new variables privilege recently
uploaded and constantly updated content. The selection of search results
is now contingent upon the location of the user, and it takes into
account his or her social networking. It is oriented toward the average
of a dynamically modeled group. In other words, Google\'s new algorithm
favors that which is gaining popularity within a user\'s social network.
The global village is thus becoming more and more
provincial.[^124^](#c2-note-0124){#c2-note-0124a}
:::
::: {.section}
### Data behaviorism {#c2-sec-0026}
Algorithms such as Google\'s thus reiterate and reinforce a tendency
that has already been apparent on both the level of individual users and
that of communal formations: in order to deal with the vast amounts and
complexity of information, they direct their gaze inward, which is not
to say toward the inner being of individual people. As a level of
reference, the individual person -- with an interior world and with
ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
black boxes that can only be understood in terms of their reactions to
stimuli. Consciousness, perception, and intention do not play any role
for them. In this regard, the legal philosopher Antoinette Rouvroy has
written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
With this, she is referring to the gradual return of a long-discredited
approach to behavioral psychology that postulated that human behavior
could be explained, predicted, and controlled purely by our outwardly
observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
Psychological dimensions were ignored (and are ignored in this new
version of behaviorism) because it is difficult to observe them
empirically. Accordingly, this approach also did away with the need
[]{#Page_122 type="pagebreak" title="122"}to question people directly or
take into account their subjective experiences, thoughts, and feelings.
People were regarded (and are so again today) as unreliable, as poor
judges of themselves, and as only partly honest when disclosing
information. Any strictly empirical science, or so the thinking went,
required its practitioners to disregard everything that did not result
in physical and observable action. From this perspective, it was
possible to break down even complex behavior into units of stimulus and
reaction. This led to the conviction that someone observing another\'s
activity always knows more than the latter does about himself or herself,
for, unlike the person being observed, whose impressions can be
inaccurate, the observer is in command of objective and complete
information. Even early on, this approach faced a wave of critique. It
was held to be mechanistic, reductionist, and authoritarian because it
privileged the observing scientist over the subject. In practice, it
quickly ran into its own limitations: it was simply too expensive and
complicated to gather data about human behavior.
Yet that has changed radically in recent years. It is now possible to
measure ever more activities, conditions, and contexts empirically.
Algorithms like Google\'s or Amazon\'s form the technical backdrop for
the revival of a mechanistic, reductionist, and authoritarian approach
that has resurrected the long-lost dream of an objective view -- the
view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
of this positivistic perspective -- that every measurement result, for
instance, reflects not only the measured but also the measurer -- is
brushed aside with reference to the sheer amounts of data that are now
at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
substantiates the claim of those in possession of these new and
comprehensive powers of observation (which, in addition to Google and
Facebook, also includes the intelligence services of Western nations),
namely that they know more about individuals than individuals know about
themselves, and are thus able to answer our questions before we ask
them. As mentioned above, this is a goal that Google expressly hopes to
achieve.
At issue with this "inward turn" is thus the space of communal
formations, which is constituted by the sum of all of the activities of
their interacting participants. In this case, however, a communal
formation is not consciously created []{#Page_123 type="pagebreak"
title="123"}and maintained in a horizontal process, but rather
synthetically constructed as a computational function. Depending on the
context and the need, individuals can either be assigned to this
function or removed from it. All of this happens behind the user\'s back
and in accordance with the goals and positions that are relevant to the
developers of a given algorithm, be it to optimize profit or
surveillance, create social norms, improve services, or whatever else.
The results generated in this way are sold to users as a personalized
and efficient service that provides a quasi-magical product. Out of the
enormous haystack of searchable information, results are generated that
are made to seem like the very needle that we have been looking for. At
best, it is only partially transparent how these results came about and
which positions in the world are strengthened or weakened by them. Yet,
as long as the needle is somewhat functional, most users are content,
and the algorithm registers this contentedness to validate itself. In
this dynamic world of unmanageable complexity, users are guided by a
sort of radical, short-term pragmatism. They are happy to have the world
pre-sorted for them in order to improve their activity in it. Regarding
the matter of whether the information being provided represents the
world accurately or not, they are unable to formulate an adequate
assessment for themselves, for it is ultimately impossible to answer
this question without certain resources. Outside of rapidly shrinking
domains of specialized or everyday knowledge, it is becoming
increasingly difficult to gain an overview of the world without
mechanisms that pre-sort it. Users are only able to evaluate search
results pragmatically; that is, in light of whether or not they are
helpful in solving a concrete problem. In this regard, it is not
paramount that they find the best solution or the correct answer but
rather one that is available and sufficient. This reality lends an
enormous amount of influence to the institutions and processes that
provide the solutions and answers.[]{#Page_124 type="pagebreak"
title="124"}
:::
:::
::: {.section .notesList}
[1](#c2-note-0001a){#c2-note-0001} André Rottmann, "Reflexive Systems
of Reference: Approximations to 'Referentialism' in Contemporary Art,"
trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
99.
[2](#c2-note-0002a){#c2-note-0002} The recognizability of the sources
distinguishes these processes from plagiarism. The latter operates with
the complete opposite aim, namely that of borrowing sources without
acknowledging them.
[4](#c2-note-0004a){#c2-note-0004} Theodor W. Adorno, *Aesthetic
Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
Minnesota Press, 1997), p. 151.
[5](#c2-note-0005a){#c2-note-0005} Peter Bürger, *Theory of the
Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
Minnesota Press, 1984).
[6](#c2-note-0006a){#c2-note-0006} Felix Stalder, "Neun Thesen zur
Remix-Kultur," *i-rights.info* (May 25, 2009), online.
[7](#c2-note-0007a){#c2-note-0007} Florian Cramer, *Exe.cut(up)able
Statements: Poetische Kalküle und Phantasmen des selbstausführenden
Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]
[8](#c2-note-0008a){#c2-note-0008} McLuhan stressed that, despite using
the alphabet, every manuscript was unique because it depended not only on
the sequence of letters but also on the individual ability of a given
scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
particular shape. With the rise of the printing press, the alphabet shed
these last elements of calligraphy and became typography.
[9](#c2-note-0009a){#c2-note-0009} Elisabeth L. Eisenstein, *The
Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
University Press, 1983), p. 15.
[10](#c2-note-0010a){#c2-note-0010} Eisenstein, *The Printing
Revolution in Early Modern Europe*, p. 204.
[11](#c2-note-0011a){#c2-note-0011} The fundamental aspects of these
conventions were formulated as early as the beginning of the sixteenth
century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
Eine historische Fallstudie über die Durchsetzung neuer Informations-
und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
420--40.
[12](#c2-note-0012a){#c2-note-0012} Eisenstein, *The Printing
Revolution in Early Modern Europe*, p. 49.
[13](#c2-note-0013a){#c2-note-0013} In April 2014, the Authors Guild --
the association of American writers that had sued Google -- filed an
appeal to overturn the decision and made a public statement demanding
that a new organization be established to license the digital rights of
out-of-print books. See "Authors Guild: Amazon was Google's Target,"
*The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
In October 2015, however, the next-highest authority -- the United
States Court of Appeals for the Second Circuit -- likewise decided in
Google\'s favor. The Authors Guild promptly announced its intention to
take the case to the Supreme Court.
[14](#c2-note-0014a){#c2-note-0014} Jean-Noël Jeanneney, *Google and
the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).
[15](#c2-note-0015a){#c2-note-0015} Within the framework of the Images
for the Future project (2007--14), the Netherlands alone invested more
than €170 million to digitize the collections of the most important
audiovisual archives. Over 10 years, the cost of digitizing the entire
cultural heritage of Europe has been estimated to be around €100
billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
Heritage: A Report for the Comité des Sages of the European Commission*
(November 2010), online.
[16](#c2-note-0016a){#c2-note-0016} Robert Darnton, "The National
Digital Public Library Is Launched!", *New York Review of Books* (April
25, 2013), online.
[17](#c2-note-0017a){#c2-note-0017} According to estimates by the
British Library, so-called "orphan works" alone -- that is, works still
legally protected but whose right holders are unknown -- make up around
40 percent of the books in its collection that still fall under
copyright law. In an effort to alleviate this problem, the European
Parliament and the European Commission issued a directive []{#Page_186
type="pagebreak" title="186"}in 2012 concerned with "certain permitted
uses of orphan works." This has allowed libraries and archives to make
works available online without permission if, "after carrying out
diligent searches," the copyright holders cannot be found. What
qualifies as a "diligent search," however, is so strictly formulated
that the German Library Association has called the directive
"impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
2012), online.
[19](#c2-note-0019a){#c2-note-0019} The numbers in this area of
activity are notoriously unreliable, and therefore only rough estimates
are possible. It seems credible, however, that the Pirate Bay was
attracting around a billion page views per month by the end of 2013.
That would make it the seventy-fourth most popular internet destination.
See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
2014), online.
[20](#c2-note-0020a){#c2-note-0020} See the documentary film *TPB AFK:
The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.
[21](#c2-note-0021a){#c2-note-0021} In technical terms, there is hardly
any difference between a "stream" and a "download." In both cases, a
complete file is transferred to the user\'s computer and played.
[22](#c2-note-0022a){#c2-note-0022} The practice is legal in Germany
but illegal in Austria, though digitized texts are routinely made
available there in seminars. See Seyavash Amini Khanimani and Nikolaus
Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
Werknutzung im österreichischen Urheberrecht zur Privilegierung
elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
2011), online.
[23](#c2-note-0023a){#c2-note-0023} Deutscher Bibliotheksverband,
"Digitalisierung" (2015), online \[--trans\].
[24](#c2-note-0024a){#c2-note-0024} David Weinberger, *Everything Is
Miscellaneous: The Power of the New Digital Disorder* (New York: Times
Books, 2007).
[25](#c2-note-0025a){#c2-note-0025} This is not a question of material
wealth. Those who are economically or socially marginalized are
confronted with the same phenomenon. Their primary experience of this
excess is with cheap goods and junk.
[26](#c2-note-0026a){#c2-note-0026} See Gregory Bateson, "Form,
Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
Collected Essays in Anthropology, Psychiatry, Evolution and
Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
"\[I\]n fact, what we mean by information -- the elementary unit of
information -- is *a difference which makes a difference*" (the emphasis
is original).
[27](#c2-note-0027a){#c2-note-0027} Inke Arns and Gabriele Horn,
*History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
42.[]{#Page_187 type="pagebreak" title="187"}
[28](#c2-note-0028a){#c2-note-0028} See the film *The Battle of
Orgreave* (2001), directed by Mike Figgis.
[29](#c2-note-0029a){#c2-note-0029} Theresa Winge, "Costuming the
Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
pp. 65--76.
[30](#c2-note-0030a){#c2-note-0030} Nicolle Lamerichs, "Stranger than
Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
(2011), online.
[31](#c2-note-0031a){#c2-note-0031} The *Oxford English Dictionary*
defines "selfie" as a "photographic self-portrait; *esp*. one taken with
a smartphone or webcam and shared via social media."
[32](#c2-note-0032a){#c2-note-0032} Odin Kroeger et al. (eds),
*Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
Kulturproduktion* (Vienna: Turia + Kant, 2011).
[33](#c2-note-0033a){#c2-note-0033} Roland Barthes, "The Death of the
Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
(London: Fontana Press, 1977), pp. 142--8.
[34](#c2-note-0034a){#c2-note-0034} Heinz Rölleke and Albert
Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
(Darmstadt: Von Zabern, 2013).
[35](#c2-note-0035a){#c2-note-0035} Hansjörg Ewert, "Alles nur
geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
realization but has long been a special area of research for
musicologists. What is new, however, is that it is no longer
controversial outside of this narrow disciplinary discourse. See Peter
J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
Field," *Notes* 50 (1994), pp. 851--70.
[36](#c2-note-0036a){#c2-note-0036} Zygmunt Bauman, *Liquid Modernity*
(Cambridge: Polity, 2000), p. 56.
[37](#c2-note-0037a){#c2-note-0037} Quoted from Eran Schaerf\'s audio
installation *FM-Scenario: Reality Race* (2013), online.
[38](#c2-note-0038a){#c2-note-0038} The number of members, for
instance, of the two large political parties in Germany, the Social
Democratic Party and the Christian Democratic Union, reached its peak at
the end of the 1970s or the beginning of the 1980s. Both were able to
increase their absolute numbers for a brief time at the beginning of the
1990s, when the Christian Democratic Union even reached its absolute
high point, but this can be explained by a surge in new members after
reunification. By 2010, both parties already had fewer members than
Greenpeace, whose 580,000 members make it Germany's largest NGO.
Parallel to this, between 1970 and 2010, the proportion of people
without any religious affiliation grew to approximately 37 percent.
That there are more churches and political parties today is indicative
of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
for any single organization to attract broad strata of society.
[39](#c2-note-0039a){#c2-note-0039} Ulrich Beck, *Risk Society: Towards
a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.
[40](#c2-note-0040a){#c2-note-0040} Ferdinand Tönnies, *Community and
Society*, trans. Charles P. Loomis (East Lansing: Michigan State
University Press, 1957).
[41](#c2-note-0041a){#c2-note-0041} Karl Marx and Friedrich Engels,
"The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
*The Cambridge Companion to the Communist Manifesto*, ed. Carver and
James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
at 239. For Marx and Engels, this was -- like everything pertaining to
the dynamics of capitalism -- a thoroughly ambivalent development. For,
in this case, it finally forced people "to take a down-to-earth view of
their circumstances, their multifarious relationships" (ibid.).
[42](#c2-note-0042a){#c2-note-0042} As early as the 1940s, Karl Polanyi
demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
1944) that the idea of strictly separated spheres, which are supposed to
be so typical of society, is in fact highly ideological. He argued above
all that the attempt to implement this separation fully and consistently
in the form of the free market would destroy the foundations of society
because both the life of workers and the environment of the market
itself would be regarded as externalities. For a recent adaptation of
this argument, see David Graeber, *Debt: The First 5000 Years* (New
York: Melville House, 2011).
[43](#c2-note-0043a){#c2-note-0043} Tönnies's persistent influence can
be felt, for instance, in Zygmunt Bauman's negative assessment of the
compunction to strive for community in his *Community: Seeking Safety in
an Insecure World* (Malden, MA: Blackwell, 2001).
[44](#c2-note-0044a){#c2-note-0044} See, for example, Amitai Etzioni,
*The Third Way to a Good Society* (London: Demos, 2000).
[45](#c2-note-0045a){#c2-note-0045} Jean Lave and Étienne Wenger,
*Situated Learning: Legitimate Peripheral Participation* (Cambridge:
Cambridge University Press, 1991), p. 98.
[46](#c2-note-0046a){#c2-note-0046} Étienne Wenger, *Cultivating
Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
Harvard Business School Press, 2000).
[47](#c2-note-0047a){#c2-note-0047} The institutions of the
disciplinary society -- schools, factories, prisons and hospitals, for
instance -- were closed. Whoever was inside could not get out.
Participation was obligatory, and instructions had to be followed. See
Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
type="pagebreak" title="189"}
[48](#c2-note-0048a){#c2-note-0048} Weber famously defined power as
follows: "Power is the probability that one actor within a social
relationship will be in a position to carry out his own will despite
resistance, regardless of the basis on which this probability rests."
Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
California Press, 1978), p. 53.
[49](#c2-note-0049a){#c2-note-0049} For those in complete despair, the
following tip is provided: "To get more likes, start liking the photos
of random people." Such a strategy, it seems, is more likely to increase
than decrease one's hopelessness. The quotations are from "How to Get
More Likes on Your Instagram Photos," *WikiHow* (2016), online.
[50](#c2-note-0050a){#c2-note-0050} Jeremy Gilbert, *Democracy and
Collectivity in an Age of Individualism* (London: Pluto Books, 2013).
[52](#c2-note-0052a){#c2-note-0052} Harrison Rainie and Barry Wellman,
*Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
2012). The term is practical because it is easy to understand, but it is
also conceptually contradictory. An individual (an indivisible entity)
cannot be defined in terms of a distributed network. With a nod toward
Gilles Deleuze, the cumbersome but theoretically more precise term
"dividual" (the divisible) has also been used. See Gerald Raunig,
"Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.
[53](#c2-note-0053a){#c2-note-0053} Jari Saramäki et al., "Persistence
of Social Signatures in Human Communication," *Proceedings of the
National Academy of Sciences of the United States of America* 111
(2014): 942--7.
[54](#c2-note-0054a){#c2-note-0054} The term "weak ties" derives from a
study of how people find out about new jobs. As the study
shows, this information does not usually come from close friends, whose
level of knowledge often does not differ much from that of the person
looking for a job, but rather from loose acquaintances, whose living
environments do not overlap much with one\'s own and who can therefore
make information available from outside of one\'s own network. See Mark
Granovetter, "The Strength of Weak Ties," *American Journal of
Sociology* 78 (1973): 1360--80.
[55](#c2-note-0055a){#c2-note-0055} Castells, *The Power of Identity*,
p. 420.
[56](#c2-note-0056a){#c2-note-0056} Ulf Weigelt, "Darf der Chef
ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}
[57](#c2-note-0057a){#c2-note-0057} Hartmut Rosa, *Social Acceleration:
A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
Columbia University Press, 2013).
[58](#c2-note-0058a){#c2-note-0058} This technique -- "social freezing"
-- has already become so standard that it is now regarded as a way to help
women achieve a better balance between work and family life. See Kolja
Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
2014), online.
[59](#c2-note-0059a){#c2-note-0059} See the film *Into Eternity*
(2009), directed by Michael Madsen.
[60](#c2-note-0060a){#c2-note-0060} Thomas S. Kuhn, *The Structure of
Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
Press, 1996).
[61](#c2-note-0061a){#c2-note-0061} Werner Busch and Peter Schmoock,
*Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
1987), p. 179 \[--trans.\].
[62](#c2-note-0062a){#c2-note-0062} "'When Attitude Becomes Form' at
the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
online.
[63](#c2-note-0063a){#c2-note-0063} Owing to the hyper-capitalization
of the art market, which has been going on since the 1990s, this role
has shifted somewhat from curators to collectors, who, though validating
their choices more on financial than on argumentative grounds, are
essentially engaged in the same activity. Today, leading curators
usually work closely together with collectors and thus deal with more
money than the first generation of curators ever could have imagined.
[64](#c2-note-0064a){#c2-note-0064} Diedrich Diederichsen, "Showfreaks
und Monster," *Texte zur Kunst* 71 (2008): 69--77.
[65](#c2-note-0065a){#c2-note-0065} Alexander R. Galloway, *Protocol:
How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
2004), pp. 7, 75.
[66](#c2-note-0066a){#c2-note-0066} Even the *Frankfurter Allgemeine
Zeitung* -- at least in its online edition -- has begun to publish more
and more articles in English. The newspaper has accepted the
disadvantage of higher editorial costs in order to remain relevant in
the increasingly globalized debate.
[67](#c2-note-0067a){#c2-note-0067} Joseph Reagle, "'Free as in
Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
online.
[68](#c2-note-0068a){#c2-note-0068} Wikipedia\'s own "Editor Survey"
from 2011 reports that 9 percent of its editors are women. Other studies have come
to a slightly higher number. See Benjamin Mako Hill and Aaron Shaw, "The
Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with
Propensity Score Estimation," *PLOS ONE* 8 (July 26, 2013), online. The
problem is well known, and the Wikimedia Foundation has been making
efforts to correct matters. In 2011, its goal was to increase the
participation of women to 25 percent by 2015. This has not been
achieved.[]{#Page_191 type="pagebreak" title="191"}
[69](#c2-note-0069a){#c2-note-0069} Shyong (Tony) K. Lam et al. (2011),
"WP: Clubhouse? An Exploration of Wikipedia's Gender Imbalance,"
*WikiSym* 11 (2011), online.
[70](#c2-note-0070a){#c2-note-0070} David Singh Grewal, *Network Power:
The Social Dynamics of Globalization* (New Haven, CT: Yale University
Press, 2008).
[71](#c2-note-0071a){#c2-note-0071} Ibid., p. 29.
[72](#c2-note-0072a){#c2-note-0072} Niklas Luhmann, *Macht im System*
(Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].
[73](#c2-note-0073a){#c2-note-0073} Mathieu O\'Neil, *Cyberchiefs:
Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).
[74](#c2-note-0074a){#c2-note-0074} Eric Steven Raymond, "The Cathedral
and the Bazaar," *First Monday* 3 (1998), online.
[75](#c2-note-0075a){#c2-note-0075} Jorge Luis Borges, "The Library of
Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
Weidenfeld, 1962), pp. 79--88.
[76](#c2-note-0076a){#c2-note-0076} Heinrich Geiselberger and Tobias
Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
(Berlin: Suhrkamp, 2013).
[77](#c2-note-0077a){#c2-note-0077} This is one of the central tenets
of science and technology studies. See, for instance, Geoffrey C. Bowker
and Susan Leigh Star, *Sorting Things Out: Classification and Its
Consequences* (Cambridge, MA: MIT Press, 1999).
[78](#c2-note-0078a){#c2-note-0078} Sybille Krämer, *Symbolische
Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
(Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), 50--69.
[79](#c2-note-0079a){#c2-note-0079} Quoted from Doron Swade, "The
'Unerring Certainty of Mechanical Agency': Machines and Table Making in
the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
Oxford University Press, 2003), pp. 145--76, at 150.
[80](#c2-note-0080a){#c2-note-0080} The mechanical construction
suggested by Leibniz was not to be realized as a practically usable (and
therefore patentable) calculating machine until 1820, by which point it
was referred to as an "arithmometer."
[82](#c2-note-0082a){#c2-note-0082} Charles Babbage, *On the Economy of
Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
have already mentioned what may, perhaps, appear paradoxical to some of
our readers -- that the division of labour can be applied with equal
success to mental operations, and that it ensures, by its adoption, the
same economy of time."
[83](#c2-note-0083a){#c2-note-0083} This structure, which is known as
"Von Neumann architecture," continues to form the basis of almost all
computers.
[84](#c2-note-0084a){#c2-note-0084} "Gordon Moore Says Aloha to
Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
type="pagebreak" title="192"}
[85](#c2-note-0085a){#c2-note-0085} Miriam Meckel, *Next: Erinnerungen
an eine Zukunft ohne uns* (Reinbeck bei Hamburg: Rowohlt, 2011). One
could also say that this anxiety has been caused by the fact that the
automation of labor has begun to affect middle-class jobs as well.
[86](#c2-note-0086a){#c2-note-0086} Steven Levy, "Can an Algorithm
Write a Better News Story than a Human Reporter?" *Wired* (April 24,
2012), online.
[87](#c2-note-0087a){#c2-note-0087} Alexander Pschera, *Animal
Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
(New York: New Vessel Press, 2016).
[88](#c2-note-0088a){#c2-note-0088} The American intelligence services
are not unique in this regard. *Spiegel* has reported that, in Russia,
entire "bot armies" have been mobilized for the "propaganda battle."
Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
*Spiegel Online* (February 28, 2015), online.
[89](#c2-note-0089a){#c2-note-0089} Lennart Guldbrandsson, "Swedish
Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
online.
[90](#c2-note-0090a){#c2-note-0090} Thomas Bunnell, "The Mathematics of
Film," *Boom Magazine* (November 2007): 48--51.
[91](#c2-note-0091a){#c2-note-0091} Christopher Steiner, "Automatons
Get Creative," *Wall Street Journal* (August 17, 2012), online.
[93](#c2-note-0093a){#c2-note-0093} Ian Ayres, *Super Crunchers: How
Anything Can Be Predicted* (London: Bookpoint, 2007).
[94](#c2-note-0094a){#c2-note-0094} Each of these models was tested on
the basis of the 50 million most common search terms from the years
2003--8 and classified according to the time and place of the search.
The results were compared with data from the health authorities. See
Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
Engine Query Data," *Nature* 457 (2009): 1012--4.
[95](#c2-note-0095a){#c2-note-0095} In absolute terms, the rate of
correct hits, at 15.8 percent, was still relatively low. With the same
dataset, however, random guessing would only have an accuracy of 0.005
percent. See Quoc V. Le et al., "Building High-Level Features Using
Large-Scale Unsupervised Learning,"
[research.google.com](http://research.google.com) (2012), online.
[96](#c2-note-0096a){#c2-note-0096} Neil Johnson et al., "Abrupt Rise
of New Machine Ecology beyond Human Response Time," *Nature: Scientific
Reports* 3 (2013), online. The authors counted 18,520 of these events
between January 2006 and February 2011; that is, about 15 per day on
average.
[97](#c2-note-0097a){#c2-note-0097} Gerald Nestler, "Mayhem in Mahwah:
The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
*Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
2014), pp. 125--46.
[98](#c2-note-0098a){#c2-note-0098} Another facial recognition
algorithm by Google provides a good impression of the rate of progress.
As early as 2011, the latter was able to identify dogs in images with 80
percent accuracy. Three years later, this rate had not only increased to
93.5 percent (which corresponds to human capabilities), but the
algorithm could also identify more than 200 different types of dog,
something that hardly any person can do. See Robert McMillan, "This Guy
Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
15, 2015), online.
[99](#c2-note-0099a){#c2-note-0099} Sergey Brin and Lawrence Page, "The
Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
Networks and ISDN Systems* 30 (1998): 107--17.
[100](#c2-note-0100a){#c2-note-0100} Eugene Garfield, "Citation Indexes
for Science: A New Dimension in Documentation through Association of
Ideas," *Science* 122 (1955): 108--11.
[101](#c2-note-0101a){#c2-note-0101} Since 1964, the data necessary for
this has been published as the Science Citation Index (SCI).
[102](#c2-note-0102a){#c2-note-0102} The assumption that the subjects
produce these structures indirectly and without any strategic intention
has proven to be problematic in both contexts. In the world of science,
there are so-called citation cartels -- groups of scientists who
frequently refer to one another\'s work in order to improve their
respective position in the SCI. Search engines have likewise given rise
to search engine optimizers, which attempt by various means to optimize
a website\'s evaluation by search engines.
[103](#c2-note-0103a){#c2-note-0103} Regarding the history of the SCI
and its influence on the early version of Google\'s PageRank, see Katja
Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
2009), pp. 64--83.
[104](#c2-note-0104a){#c2-note-0104} A site with no links pointing to it
could not be registered by the algorithm at all, because the search
engine indexed the web by having its "crawler" follow links from page to
page.
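As a minimal sketch of this mechanism (assuming a simple breadth-first
crawler; the `fetch_links` callback and the toy link graph below are
hypothetical stand-ins for actual page fetching, not the cited engine's
code), one might write:

```python
from collections import deque

def crawl(seed_urls, fetch_links):
    """Breadth-first link crawl: only pages reachable by following links
    from the seed pages ever enter the index."""
    index = set()
    queue = deque(seed_urls)
    while queue:
        url = queue.popleft()
        if url in index:
            continue
        index.add(url)
        for linked_url in fetch_links(url):  # links found on the fetched page
            if linked_url not in index:
                queue.append(linked_url)
    return index

# Toy link graph (hypothetical): no page links to "d", so the crawler
# never discovers it, even though "d" itself links outward.
links = {"a": ["b", "c"], "b": ["a"], "c": [], "d": ["a"]}
print(crawl(["a"], lambda url: links.get(url, [])))  # {'a', 'b', 'c'}
```

A page such as "d", with no inbound links, never enters the index:
coverage is a property of the link graph, not of the page itself.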
[106](#c2-note-0106a){#c2-note-0106} Martin Feuz et al., "Personal Web
Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
of Personalisation," *First Monday* 17 (2011), online.
[108](#c2-note-0108a){#c2-note-0108} Thus, it is not only the world of
advertising that motivates the collection of personal information. Such
information is also needed for the development of personalized
algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
the flood of data. It can therefore be assumed that the rampant
collection of personal information will not cease or slow down even if
commercial demands happen to change, for instance to a business model
that is not based on advertising.
[109](#c2-note-0109a){#c2-note-0109} For a detailed discussion of how
these three levels are recorded, see Felix Stalder and Christine Mayer,
"Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
112--31.
[110](#c2-note-0110a){#c2-note-0110} This raises the question of which
drivers should be sent on a detour, so that no traffic jam arises, and
which should be shown the most direct route, now kept free of traffic.
[112](#c2-note-0112a){#c2-note-0112} Lisa Gitelman (ed.), *"Raw Data"
Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).
[113](#c2-note-0113a){#c2-note-0113} The terms "raw," in the sense of
unprocessed, and "cooked," in the sense of processed, derive from the
anthropologist Claude Lévi-Strauss, who introduced them to clarify the
difference between nature and culture. See Claude Lévi-Strauss, *The Raw
and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
IL: University of Chicago Press, 1983).
[114](#c2-note-0114a){#c2-note-0114} Jessica Lee, "No. 1 Position in
Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
2013), online.
[115](#c2-note-0115a){#c2-note-0115} One estimate that continues to be
cited quite often is already obsolete: Michael K. Bergman, "White Paper
-- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
Publishing* 7 (2001), online. The more content is dynamically generated
by databases, the more questionable such estimates become. It is
uncontested, however, that only a small portion of online information is
registered by search engines.
[116](#c2-note-0116a){#c2-note-0116} Theo Röhle, "Die Demontage der
Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
133--48.
[117](#c2-note-0117a){#c2-note-0117} The phenomenon of preparing the
world to be recorded by algorithms is not restricted to digital
networks. As early as 1994 in Germany, for instance, a new sort of
typeface was introduced (the *Fälschungserschwerende Schrift*,
"forgery-impeding typeface") on license plates for the sake of machine
readability and facilitating automatic traffic control. To the human
eye, however, it appears somewhat misshapen and
disproportionate.[]{#Page_195 type="pagebreak" title="195"}
[118](#c2-note-0118a){#c2-note-0118} [Fairsearch.org](http://Fairsearch.org)
was officially supported by several of Google\'s competitors, including
Microsoft, TripAdvisor, and Oracle.
[119](#c2-note-0119a){#c2-note-0119} "Antitrust: Commission Sends
Statement of Objections to Google on Comparison Shopping Service,"
*European Commission: Press Release Database* (April 15, 2015), online.
[120](#c2-note-0120a){#c2-note-0120} Amit Singhal, "An Update to Our
Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
the middle of 2014, according to some sources, Google had received
around 20 million requests to remove links from its index on account of
copyright violations.
[121](#c2-note-0121a){#c2-note-0121} Alexander Wragge, "Google-Ranking:
Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
online.
[122](#c2-note-0122a){#c2-note-0122} Farhad Manjoo, "Amazon\'s Tactics
Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
(May 23, 2014), online.
[123](#c2-note-0123a){#c2-note-0123} Lucas D. Introna and Helen
Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
Matters," *Information Society* 16 (2000): 169--85, at 181.
[124](#c2-note-0124a){#c2-note-0124} Eli Pariser, *The Filter Bubble:
How the New Personalized Web Is Changing What We Read and How We Think*
(New York: Penguin, 2012).
[125](#c2-note-0125a){#c2-note-0125} Antoinette Rouvroy, "The End(s) of
Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
York: Routledge, 2013), pp. 143--65.
[126](#c2-note-0126a){#c2-note-0126} See B. F. Skinner, *Science and
Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
to predict and control the behavior of the individual organism. This is
our 'dependent variable' -- the effect for which we are to find the
cause. Our 'independent variables' -- the causes of behavior -- are the
external conditions of which behavior is a function."
[127](#c2-note-0127a){#c2-note-0127} Nathan Jurgenson, "View from
Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
9, 2014), online.
[128](#c2-note-0128a){#c2-note-0128} danah boyd and Kate Crawford,
"Critical Questions for Big Data: Provocations for a Cultural,
Technological and Scholarly Phenomenon," *Information, Communication &
Society* 15 (2012): 662--79.
:::
:::
1. [Preface to the English Edition](#fpref)
2. [Acknowledgments](#ack)
3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
1. [Notes](#f6-ntgp-9999)
4. [I: Evolution](#c1)
1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
2. [The Culturalization of the World](#c1-sec-0006)
3. [The Technologization of Culture](#c1-sec-0009)
4. [From the Margins to the Center of Society](#c1-sec-0013)
5. [Notes](#c1-ntgp-9999)
5. [II: Forms](#c2)
1. [Referentiality](#c2-sec-0002)
2. [Communality](#c2-sec-0009)
3. [Algorithmicity](#c2-sec-0018)
4. [Notes](#c2-ntgp-9999)
6. [III: Politics](#c3)
1. [Post-democracy](#c3-sec-0002)
2. [Commons](#c3-sec-0011)
3. [Against a Lack of Alternatives](#c3-sec-0017)
4. [Notes](#c3-ntgp-9999)
[Preface to the English Edition]{.chapterTitle} {#fpref}
::: {.section}
This book posits that we in the societies of the (transatlantic) West
find ourselves in a new condition. I call it "the digital condition"
because it gained its dominance as computer networks became established
as the key infrastructure for virtually all aspects of life. However,
the emergence of this condition pre-dates computer networks. In fact, it
has deep historical roots, some of which go back to the late nineteenth
century, but it really came into being after the late 1960s. As many of
the cultural and political institutions shaped by the previous condition
-- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
forms of personal and collective orientation and organization emerged
which have been shaped by the affordances of this new condition. Both
the historical processes which unfolded over a very long time and the
structural transformation which took place in a myriad of contexts have
been beyond any deliberate influence. Although obviously caused by
social actors, the magnitude of such changes was simply too great, too
distributed, and too complex to be attributed to, or molded by, any
particular (set of) actor(s).
Yet -- and this is the core of what motivated me to write this book --
this does not mean that we have somehow moved beyond the political,
beyond the realm in which identifiable actors and their projects do
indeed shape our collective []{#Page_vii type="pagebreak"
title="vii"}existence, or that there are no alternatives to future
development already expressed within contemporary dynamics. On the
contrary, we can see very clearly that as the center -- the established
institutions shaped by the affordances of the previous condition -- is
crumbling, more economic and political projects are rushing in to fill
that void with new institutions that advance their competing agendas.
These new institutions are well adapted to the digital condition, with
its chaotic production of vast amounts of information and innovative
ways of dealing with it.
From this, two competing trajectories have emerged which are
simultaneously transforming the space of the political. For the first, I
used the term "post-democracy" because it expands possibilities, and even
requirements, of (personal) participation, while ever larger aspects of
(collective) decision-making are moved to arenas that are structurally
disconnected from those of participation. In effect, these arenas are
forming an authoritarian reality in which a small elite is vastly
empowered at the expense of everyone else. The purest incarnation of
this tendency can be seen in the commercial social mass media, such as
Facebook, Google, and the others, as they were newly formed in this
condition and have not (yet) had to deal with the complications of
transforming their own legacy.
For the other trajectory, I applied the term "commons" because it
expands both the possibilities of personal participation and agency, and
those of collective decision-making. This tendency points to a
redefinition of democracy beyond the hollowed-out forms of political
representation characterizing the legacy institutions of liberal
democracy. The purest incarnation of this tendency can be found in the
institutions that produce the digital commons, such as Wikipedia and the
various Free Software communities whose work has been and still is
absolutely crucial for the infrastructural dimensions of the digital
networks. They are the most advanced because, again, they have not had
to deal with institutional legacies. But both tendencies are no longer
confined to digital networks and are spreading across all aspects of
social life, creating a reality that is, on the structural level,
surprisingly coherent and, on the social and political level, full of
contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
title="viii"}
I traced some aspects of these developments right up to early 2016, when
the German version of this book went into production. Since then a lot
has happened, but I resisted the temptation to update the book for the
English translation because ideas are always an expression of their
historical moment; as such, an update would either turn into a
completely new version or amount to a retrospective adjustment of the
historical record.
What has become increasingly obvious during 2016 and into 2017 is that
central institutions of liberal democracy are crumbling more quickly and
dramatically than was expected. The race to replace them has kicked into
high gear. The main events driving forward an authoritarian renewal of
politics took place on a national level, in particular the vote by the
UK to leave the EU (Brexit) and the election of Donald Trump to the
office of president of the United States of America. The main events
driving the renewal of democracy took place on a metropolitan level,
namely the emergence of a network of "rebel cities," led by Barcelona
and Madrid. There, community-based social movements established their
candidates in the highest offices. These cities are now putting in place
practical examples that other cities could emulate and adapt. For the
concerns of this book, the most important concept put forward is that of
"technological sovereignty": to bring the technological infrastructure,
and its developmental potential, back under the control of those who are
using it and are affected by it; that is, the citizens of the
metropolis.
Over the last 18 months, the imbalances between the two trajectories
have become even more extreme because authoritarian tendencies and
surveillance capitalism have been strengthened more quickly than the
commons-oriented practices could establish themselves. But it does not
change the fact that there are fundamental alternatives embedded in the
digital condition. Despite structural transformations that affect how we
do things, there is no inevitability about what we want to do
individually and, even more importantly, collectively.
::: {.section}
While it may be conventional to cite one person as the author of a book,
writing is a process with many collective elements. This book in
particular draws upon many sources, most of which I am no longer able to
acknowledge with any certainty. Far too often, important references came
to me in parenthetical remarks, in fleeting encounters, during trips, at
the fringes of conferences, or through discussions of things that,
though entirely new to me, were so obvious to others as not to warrant
any explication. Often, too, my thinking was influenced by long
conversations, and it is impossible for me now to identify the precise
moments of inspiration. As far as the themes of this book are concerned,
four settings were especially important. The international discourse
network "nettime," which has a mailing list of 4,500 members and which I
have been moderating since the late 1990s, represents an inexhaustible
source of internet criticism and, as a collaborative filter, has enabled
me to follow a wide range of developments from a particular point of
view. I am also indebted to the Zurich University of the Arts, where I
have taught for more than 10 years and where the students have been
willing to explain to me, again and again, what is already self-evident
to them. Throughout my time there, I have been able to observe a
dramatic shift. For today\'s students, the "new" is no longer new but
simply obvious, whereas they []{#Page_x type="pagebreak" title="x"}have
experienced many things previously regarded as normal -- such as
checking out a book from a library (instead of downloading it) -- as
needlessly complicated. In Vienna, the hub of my life, the World
Information Institute has for many years provided a platform for
conferences, publications, and interventions that have repeatedly raised
the stakes of the discussion and have brought together the most
interesting range of positions without regard to any disciplinary
boundaries. Housed in Vienna, too, is the Technopolitics Project, a
non-institutionalized circle of researchers and artists whose
discussions of techno-economic paradigms have informed this book in
fundamental ways and which has offered multiple opportunities for me to
workshop inchoate ideas.
Not everything, however, takes place in diffuse conversations and
networks. I was also able to rely on the generous support of several
individuals who, at one stage or another, read through, commented upon,
and made crucial improvements to the manuscript: Leonhard Dobusch,
Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
thanks are owed to Rebina Erben-Hartig, who edited the original German
manuscript and greatly improved its readability. I am likewise grateful
to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
Verlag, whose faith in the book never wavered despite several delays.
Regarding the English version at hand, it has been a privilege to work
with a translator as skillful as Valentine Pakis. Over the past few
years, writing this book might have been the most important project in
my life had it not been for Andrea Mayr. In this regard, I have been
especially fortunate.[]{#Page_xi type="pagebreak"
title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
:::
Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}
::: {.section}
The show had already been going on for more than three hours, but nobody
was bothered by this. Quite the contrary. The tension in the venue was
approaching its peak, and the ratings were through the roof. Throughout
all of Europe, 195 million people were watching the spectacle on
television, and the social mass media were gaining steam. On Twitter,
more than 47,000 messages were being sent every minute with the hashtag
\#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
decided shortly after midnight: Conchita Wurst, the bearded diva, was
announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
as the public celebrated the victor -- but also itself. At long last,
there was more to the event than just another round of tacky television
programming ("This is Ljubljana calling!"). Rather, a statement was made
-- a statement in favor of tolerance and against homophobia, for
diversity and for the right to define oneself however one pleases. And
Europe sent this message in the midst of a crisis and despite ongoing
hostilities, not to mention all of the toxic rumblings that could be
heard about decadence, cultural decay, and Gayropa. Visibly moved, the
Austrian singer let out an exclamation -- "We are unity, and we are
unstoppable!" -- as she returned to the stage with wobbly knees to
accept the trophy.
With her aesthetically convincing performance, Conchita succeeded in
unleashing a strong desire for personal []{#Page_1 type="pagebreak"
title="1"}self-discovery, for community, and for overcoming stale
conventions. And she did this through a character that mainstream
society would have considered paradoxical and deviant not long ago but
has since come to understand: attractive beyond the dichotomy of man and
woman, explicitly artificial and yet entirely authentic. This peculiar
conflation of artificiality and naturalness is equally present in
Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was also, on a
formal level, seemingly paradoxical: extremely focused and completely
open. Unlike most of the other acts, she took the stage alone, and
though she hardly moved at all, she nevertheless incited the audience to
participate in numerous ways and genuinely to act out the motto of the
contest ("Join us!"). Throughout the early rounds of the competition,
the beard, which was at first so provocative, transformed into a
free-floating symbol that the public began to appropriate in various
ways. Men and women painted Conchita-like beards on their faces,
newspapers printed beards to be cut out, and fans crocheted beards. Not
only did someone Photoshop a beard on to a painting of Empress Sissi of
Austria, but King Willem-Alexander of the Netherlands even tweeted a
deceptively realistic portrait of his wife, Queen Máxima, wearing a
beard. From one of the biggest stages of all, the evening of Wurst\'s
victory conveyed an impression of how much the culture of Europe had
changed in recent years, both in terms of its content and its forms.
That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
or the conflation of reception and production, for instance -- was now
part of the mainstream. Even while sitting in front of the television,
this mainstream was no longer just a private audience but rather a
multitude of singular producers whose networked activity -- on location
or on social mass media -- lent particular significance to the occasion
as a moment of collective self-perception.
It is more than half a century since Marshall McLuhan announced the end
of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
in honor of the print medium by which it was so influenced. What was
once just an abstract speculation of media theory, however, now
describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
our everyday life. What\'s more, we have moved well past McLuhan\'s
diagnosis: it is not just that the erosion of old cultural forms,
institutions, and certainties can be affirmed; new ones have already
formed, and their contours are easy to identify not only in niche
sectors but in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
expanded the gender-identity options for its billion-plus users from 2
to 60. In addition to "male" and "female," users of the English version
of the site can now choose from among the following categories:
::: {.extract}
Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
Female to Male Trans Man, Female to Male Transgender Man, Female to Male
Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
(MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
Two-Spirit, Two-Spirit Person.
:::
This enormous proliferation of cultural possibilities is an expression
of what I will refer to below as the digital condition. Far from being
universally welcomed, its growing presence has also instigated waves of
nostalgia, diffuse resentments, and intellectual panic. Conservative and
reactionary movements, which oppose such developments and desire to
preserve or even re-create previous conditions, have been on the rise.
Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
be mentioned positively in the sexual education curriculum. Its impetus
was a working paper released at the end of 2013 by the state\'s
[]{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
things, it proposed that adolescents "should confront their own sexual
identity and orientation \[...\] from a position of acceptance with
respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
short period of time, a campaign organized mainly through social mass
media collected more than 200,000 signatures in opposition to the
proposal and submitted them to the petitions committee at the state
parliament. At that point, the government responded by putting the
initiative on ice. However, according to the analysis presented in this
book, leaving it on ice creates a precarious situation.
The rise and spread of the digital condition is the result of a
wide-ranging and irreversible cultural transformation, the beginnings of
which can in part be traced back to the nineteenth century. Since the
1960s, however, this shift has accelerated enormously and has
encompassed increasingly broader spheres of social life. More and more
people have been participating in cultural processes; larger and larger
dimensions of existence have become battlegrounds for cultural disputes;
and social activity has been intertwined with increasingly complex
technologies, without which it would hardly be possible to conceive of
these processes, let alone achieve them. The number of competing
cultural projects, works, reference points, and reference systems has
been growing rapidly. This, in turn, has caused an escalating crisis for
the established forms and institutions of culture, which are poorly
equipped to deal with such an inundation of new claims to meaning. Since
roughly the year 2000, many previously independent developments have
been consolidating, gaining strength and modifying themselves to form a
new cultural constellation that encompasses broad segments of society --
a new galaxy, as McLuhan might have
said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
easy to recognize the specific forms that characterize it as a whole and
how these forms have contributed to new, contradictory and
conflict-laden political dynamics.
My argument, which is restricted to cultural developments in the
(transatlantic) West, is divided into three chapters. In the first, I
will outline the *historical* developments that have given rise to this
quantitative and qualitative change and have led to the crisis faced by
the institutions of the late phase of the Gutenberg Galaxy, which
defined the last third []{#Page_4 type="pagebreak" title="4"}of the
twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
the social basis of cultural processes will be traced back to changes in
the labor market, to the self-empowerment of marginalized groups, and to
the dissolution of centralized cultural geography. The broadening of
cultural fields will be discussed in terms of the rise of design as a
general creative discipline, and the growing significance of complex
technologies -- as fundamental components of everyday life -- will be
tracked from the beginnings of independent media up to the development
of the internet as a mass medium. These processes, which at first
unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
of the coherent digital condition. From the perspective of cultural
studies and media theory, the second chapter will delineate the already
recognizable features of this new culture. Since it is concerned above
all with the analysis of forms, its focus is on the question of \"how\" cultural
practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
content, social spheres, and locations that it is even possible to speak
of the digital condition in the singular. Three examples of such forms
stand out in particular. *Referentiality* -- that is, the use of
existing cultural materials for one\'s own production -- is an essential
feature of many methods for inscribing oneself into cultural processes.
In the context of unmanageable masses of shifting and semantically open
reference points, the act of selecting things and combining them has
become fundamental to the production of meaning and the constitution of
the self. The second feature that characterizes these processes is
*communality*. It is only through a collectively shared frame of
reference that meanings can be stabilized, possible courses of action
can be determined, and resources can be made available. This has given
rise to communal formations that generate self-referential worlds, which
in turn modulate various dimensions of existence -- from aesthetic
preferences to the methods of biological reproduction and the rhythms of
space and time. In these worlds, the dynamics of network power have
reconfigured notions of voluntary and involuntary behavior, autonomy,
and coercion. The third feature of the new cultural landscape is its
*algorithmicity*. It is characterized, in other []{#Page_5
type="pagebreak" title="5"}words, by automated decision-making processes
that reduce and give shape to the glut of information, by extracting
information from the volume of data produced by machines. This extracted
information is then accessible to human perception and can serve as the
basis of singular and communal activity. Faced with the enormous amount
of data generated by people and machines, we would be blind were it not
for algorithms.
The third chapter will focus on *political dimensions*. These are the
factors that enable the formal dimensions described in the preceding
chapter to manifest themselves in the form of social, political, and
economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
general cultural forms that emerged from these changes with a certain
degree of inevitability, my concentration here will be on open-ended
dynamics that can still be influenced. A contrast will be made between
two political tendencies of the digital condition that are already quite
advanced: *post-democracy* and *commons*. Both take full advantage of
the possibilities that have arisen on account of structural changes and
have advanced them even further, though in entirely different
directions. "Post-democracy" refers to strategies that counteract the
enormously expanded capacity for social communication by disconnecting
the possibility to participate in things from the ability to make
decisions about them. Everyone is allowed to voice his or her opinion,
but decisions are ultimately made by a select few. Even though growing
numbers of people can and must take responsibility for their own
activity, they are unable to influence the social conditions -- the
social texture -- under which this activity has to take place. Social
mass media such as Facebook and Google will receive particular attention
as the most conspicuous manifestations of this tendency. Here, under new
structural provisions, a new combination of behavior and thought has
been implemented that promotes the normalization of post-democracy and
contributes to its otherwise inexplicable acceptance in many areas of
society. "Commons," on the contrary, denotes approaches for developing
new and comprehensive institutions that not only directly combine
participation and decision-making but also integrate economic, social,
and ethical spheres -- spheres that Modernity has tended to keep
apart.[]{#Page_6 type="pagebreak" title="6"}
Post-democracy and commons can be understood as two lines of development
that point beyond the current crisis of liberal democracy and represent
new political projects. One can be characterized as an essentially
authoritarian system, the other as a radical expansion and renewal of
democracy, from the notion of representation to that of participation.
Even though I have brought together a number of broad perspectives, I
have refrained from discussing certain topics that a book entitled *The
Digital Condition* might be expected to address, notably the matter of
copyright. This is easy to explain. As regards the new
forms at the heart of this book, none of these developments requires or
justifies copyright law in its present form. In any case, my thoughts on
the matter were published not long ago in another book, so there is no
need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
of privacy will also receive little attention. This is not because I
share the view, held by proponents of "post-privacy," that it would be
better for all personal information to be made available to everyone. On
the contrary, this position strikes me as superficial and naïve. That
said, the political function of privacy -- to safeguard a degree of
personal autonomy from powerful institutions -- is based on fundamental
concepts that, in light of the developments to be described below,
urgently need to be updated. This is a task, however, that would take me
far beyond the scope of the present
book.[^6^](#f6-note-0006){#f6-note-0006a}
Before moving on to the first chapter, I should first briefly explain my
somewhat unorthodox understanding of the central concepts in the title
of the book -- "condition" and "digital." In what follows, the term
"condition" will be used to designate a cultural condition whereby the
processes of social meaning -- that is, the normative dimension of
existence -- are explicitly or implicitly negotiated and realized by
means of singular and collective activity. Meaning, however, does not
manifest itself in signs and symbols alone; rather, the practices that
engender it and are inspired by it are consolidated into artifacts,
institutions, and lifeworlds. In other words, far from being a symbolic
accessory or mere overlay, culture in fact directs our actions and gives
shape to society. By means of materialization and repetition, meaning --
both as claim and as reality -- is made visible, productive, and
negotiable. People are free to accept it, reject it, or ignore
[]{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
that is, meaning shared by multiple people -- can only come about
through processes of exchange within larger or smaller formations.
Production and reception (to the extent that it makes any sense to
distinguish between the two) do not proceed linearly here, but rather
loop back and reciprocally influence one another. In such processes, the
participants themselves determine, in a more or less binding manner, how
they stand in relation to themselves, to each other, and to the world,
and they determine the frame of reference in which their activity is
oriented. Accordingly, culture is not something static or something that
is possessed by a person or a group, but rather a field of dispute that
is subject to multiple ongoing changes, each happening at its own pace.
constitution that may be collaborative, oppositional, or simply
operating side by side. The field of culture is pervaded by competing
claims to power and mechanisms for exerting it. This leads to conflicts
about which frames of reference should be adopted for different fields
and within different social groups. In such conflicts,
self-determination and external determination interact until a point is
reached at which both sides are mutually constituted. This, in turn,
changes the conditions that give rise to shared meaning and personal
identity.
In what follows, this broadly post-structuralist perspective will inform
my discussion of the causes and formational conditions of cultural
orders and their practices. Culture will be conceived throughout as
something heterogeneous and hybrid. It draws from many sources; it is
motivated by the widest possible variety of desires, intentions, and
compulsions; and it mobilizes whatever resources might be necessary for
the constitution of meaning. This emphasis on the materiality of culture
is also reflected in the concept of the digital. Media are relational
technologies, which means that they facilitate certain types of
connection between humans and
objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
material and immaterial goods, and in the constitution and coordination
of personal and collective activity. In this regard, the focus is less
on the dominance of a certain class []{#Page_8 type="pagebreak"
title="8"}of technological artifacts -- the computer, for instance --
and even less on distinguishing between "digital" and "analog,"
"material" and "immaterial." Even in the digital condition, the analog
has not gone away. Rather, it has been re-evaluated and even partially
upgraded. The immaterial, moreover, is never entirely without
materiality. On the contrary, the fleeting impulses of digital
communication depend on global and unmistakably material infrastructures
that extend from mines beneath the surface of the earth, from which rare
earth metals are extracted, all the way into outer space, where
satellites are circling around above us. Such things may be ignored
because they are outside the experience of everyday life, but that does
not mean that they have disappeared or that they are of any less
significance. "Digital" thus refers to historically new possibilities
for constituting and connecting various human and non-human actors,
which is not limited to digital media but rather appears everywhere as a
relational paradigm that alters the realm of possibility for numerous
materials and actors. My understanding of the digital thus approximates
the concept of the "post-digital," which has been gaining currency over
the past few years within critical media cultures. Here, too, the
distinction between "new" and "old" media and all of the ideological
baggage associated with it -- for instance, that the new represents the
future while the old represents the past -- have been rejected. The
aesthetic projects that continue to define the image of the "digital" --
immateriality, perfection, and virtuality -- have likewise been
discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
"post-digital" is a critical response to this techno-utopian aesthetic
and its attendant economic and political perspectives. According to the
cultural theorist Florian Cramer, the concept accommodates the fact that
"new ethical and cultural conventions which became mainstream with
internet communities and open-source culture are being retroactively
applied to the making of non-digital and post-digital media
products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
that process-based practices oriented toward open interaction, which
first developed within digital media, have since begun to appear in more
and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9 type="pagebreak" title="9"}
For the historical, cultural-theoretical, and political perspectives
developed in this book, however, the concept of the post-digital is
somewhat problematic, for it requires the narrow context of media art
and its fixation on technology in order to become a viable
counter-position. Without this context, certain misunderstandings are
impossible to avoid. The prefix "post-," for instance, is often
interpreted in the sense that something is over or that we have at least
grasped the matters at hand and can thus turn to something new. The
opposite is true. The most enduringly relevant developments are only now
beginning to adopt a specific form, long after digital infrastructures
and the practices made popular by them have become part of our everyday
lives. Or, as the communication theorist and consultant Clay Shirky puts
it, "Communication tools don\'t get socially interesting until they get
technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
only today, now that our fascination for this technology has waned and
its promises sound hollow, that culture and society are being defined by
the digital condition in a comprehensive sense. Before, this was the
case in just a few limited spheres. It is this hybridization and
solidification of the digital -- the presence of the digital beyond
digital media -- that lends the digital condition its dominance. As to
the concrete realities in which these things will materialize, this is
currently being decided in an open and ongoing process. The aim of this
book is to contribute to our understanding of this process.[]{#Page_10
type="pagebreak" title="10"}
:::
::: {.section .notesList}
[1](#f6-note-0001a){#f6-note-0001} Dan Biddle, "Five Million Tweets for
\#Eurovision 2014," *Twitter UK* (May 11, 2014), online.
[2](#f6-note-0002a){#f6-note-0002} Ministerium für Kultus, Jugend und
Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
von Leitprinzipien," online \[--trans.\].
[3](#f6-note-0003a){#f6-note-0003} As early as 1995, Wolfgang Coy
suggested that McLuhan\'s metaphor should be supplanted by the concept
of the "Turing Galaxy," but this never caught on. See his introduction
to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
Marshall McLuhan, *Die Gutenberg-Galaxis: Das Ende des Buchzeitalters*
(Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
type="pagebreak" title="176"}
[4](#f6-note-0004a){#f6-note-0004} According to the analysis of the
Spanish sociologist Manuel Castells, this crisis began almost
simultaneously in highly developed capitalist and socialist societies,
and it did so for the same reason: the paradigm of "industrialism" had
reached the limits of its productivity. Unlike the capitalist societies,
which were flexible enough to tame the crisis and reorient their
economies, the socialism of the 1970s and 1980s experienced stagnation
until it ultimately, in a belated effort to reform, collapsed. See
Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
2010), pp. 5--68.
[5](#f6-note-0005a){#f6-note-0005} Felix Stalder, *Der Autor am Ende
der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).
[6](#f6-note-0006a){#f6-note-0006} For my preliminary thoughts on this
topic, see Felix Stalder, "Autonomy and Control in the Era of
Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
*Surveillance & Society* 1 (2002): 120--4. For a discussion of these
approaches, see the working paper by Maja van der Velden, "Personal
Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
(2011), online.
[7](#f6-note-0007a){#f6-note-0007} Accordingly, the "new social" media
are mass media in the sense that they influence broadly disseminated
patterns of social relations and thus shape society as much as the
traditional mass media had done before them.
[8](#f6-note-0008a){#f6-note-0008} Kim Cascone, "The Aesthetics of
Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
*Computer Music Journal* 24/2 (2000): 12--18.
[10](#f6-note-0010a){#f6-note-0010} In the field of visual arts,
similar considerations have been made regarding "post-internet art." See
Artie Vierkant, "The Image Object Post-Internet,"
[jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
Art Movement," *Artspace* (March 18, 2014), online.
[11](#f6-note-0011a){#f6-note-0011} Clay Shirky, *Here Comes Everybody:
The Power of Organizing without Organizations* (New York: Penguin,
2008), p. 105.
:::
:::
::: {.section}
Many authors have interpreted the new cultural realities that
characterize our daily lives as a direct consequence of technological
developments: the internet is to blame! This assumption is not only
empirically untenable; it also leads to a problematic assessment of the
current situation. Apparatuses are represented as "central actors," and
this suggests that new technologies have suddenly revolutionized a
situation that had previously been stable. Depending on one\'s point of
view, this is then regarded as "a blessing or a
curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
however, reveals an entirely different picture. Established cultural
practices and social institutions had already been witnessing the
erosion of their self-evident justification and legitimacy, long before
they were faced with new technologies and the corresponding demands
these make on individuals. Moreover, the allegedly new types of
coordination and cooperation are also not so new after all. Many of them
have existed for a long time. At first most of them were totally
separate from the technologies for which, later on, they would become
relevant. It is only in retrospect that these developments can be
identified as beginnings, and it can be seen that much of what we regard
today as novel or revolutionary was in fact introduced at the margins of
society, in cultural niches that were unnoticed by the dominant actors
and institutions. The new technologies thus evolved against a
[]{#Page_11 type="pagebreak" title="11"}background of processes of
societal transformation that were already under way. They could only
have been developed once a vision of their potential had been
formulated, and they could only have been disseminated where demand for
them already existed. This demand was created by social, political, and
economic crises, which were themselves initiated by changes that were
already under way. The new technologies seemed to provide many differing
and promising answers to the urgent questions that these crises had
prompted. It was thus a combination of positive vision and pressure that
motivated a great variety of actors to change, at times with
considerable effort, the established processes, mature institutions, and
their own behavior. They intended to appropriate, for their own
projects, the various and partly contradictory possibilities that they
saw in these new technologies. Only then did a new technological
infrastructure arise.
This, in turn, created the preconditions for previously independent
developments to come together, strengthening one another and enabling
them to spread beyond the contexts in which they had originated. Thus,
they moved from the margins to the center of culture. And by
intensifying the crisis of previously established cultural forms and
institutions, they became dominant and established new forms and
institutions of their own.
:::
::: {.section}
The Expansion of the Social Basis of Culture {#c1-sec-0002}
--------------------------------------------
Watching television discussions from the 1950s and 1960s today, one is
struck not only by the billows of cigarette smoke in the studio but also
by the homogeneous spectrum of participants. Usually, it was a group of
white and heteronormatively behaving men speaking with one
another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
who held the important institutional positions in the centers of the
West. As a rule, those involved were highly specialized representatives
from the cultural, economic, scientific, and political spheres. Above
all, they were legitimized to appear in public to articulate their
opinions, which were to be regarded by others as relevant and worthy of
discussion. They presided over the important debates of their time. With
few exceptions, other actors and their deviant opinions -- there
[]{#Page_12 type="pagebreak" title="12"}has never been a time without
them -- were either not taken seriously at all or were categorized as
indecent, incompetent, perverse, irrelevant, backward, exotic, or
idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
the social basis of culture was beginning to expand, though the actors
at the center of the discourse had failed to notice this. Communicative
and cultural processes were gaining significance in more and more
places, and excluded social groups were self-consciously developing
their own language in order to intervene in the discourse. The rise of
the knowledge economy, the increasingly loud critique of
heteronormativity, and a fundamental cultural critique posed by
post-colonialism enabled a greater number of people to participate in
public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
complexity, I will treat them on different levels: I will depict the
rise of the knowledge economy as a structural change in labor; I will
reconstruct the critique of heteronormativity by outlining the origins
and transformations of the gay movement in West Germany; and I will
discuss post-colonialism as a theory that introduced new concepts of
cultural multiplicity and hybridization -- concepts that are now
influencing the digital condition far beyond the limits of the
post-colonial discourse, and often without any reference to this
discourse at all.
::: {.section}
### The growth of the knowledge economy {#c1-sec-0003}
At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
concerned with patents and copyright law. In line with the neo-classical
Austrian School, he considered both to be problematic (because
state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
longer he studied the monopoly of the patent system in particular, the
more far-reaching its consequences seemed to him. He maintained that the
patent system was intertwined with something that might be called the
"economy of invention" -- ultimately, patentable insights had to be
produced in the first place -- and that this was in turn part of a much
larger economy of knowledge. The latter encompassed government agencies
as well as institutions of education, research, and development
[]{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
and certain corporate laboratories), which had been increasing steadily
in number since Roosevelt\'s New Deal. Yet it also included the
expanding media sector and those industries that were responsible for
providing technical infrastructure. Machlup subsumed all of these
institutions and sectors under the concept of the "knowledge economy," a
term of his own invention. Their common feature was that essential
aspects of their activities consisted in communicating things to other
people ("telling anyone anything," as he put it). Thus, the employees
were not only recipients of information or instructions; rather, in one
way or another, they themselves communicated, be it merely as a
secretary who typed up, edited, and forwarded a piece of shorthand
dictation. In his book *The Production and Distribution of Knowledge in
the United States*, published in 1962, Machlup gathered empirical
material to demonstrate that the American economy had entered a new
phase that was distinguished by the production, exchange, and
application of abstract, codified
knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
longer entirely novel at the time, but it had never before been
presented in such an empirically detailed and comprehensive
manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
economy surprised Machlup himself: in his book, he concluded that as
much as 43 percent of all labor activity was already engaged in this
sector. This high number came about because, until then, no one had put
forward the idea of understanding such a variety of activities as a
single unit.
Machlup\'s categorization was indeed quite innovative, for the dynamics
that propelled the sectors that he associated with one another not only
were very different but also had originated as an integral component in
the development of the industrial production of goods. They were more of
an extension of such production than a break with it. The production and
circulation of goods had been expanding and accelerating as early as the
nineteenth century, though at highly divergent rates from one region or
sector to another. New markets were created in order to distribute goods
that were being produced in greater numbers; new infrastructure for
transportation and communication was established in order to serve these
large markets, which were mostly in the form of national territories
(including their colonies). This []{#Page_14 type="pagebreak"
title="14"}enabled even larger factories to be built in order to
exploit, to an even greater extent, the cost advantages of mass
production. In order to control these complex processes, new professions
arose with different types of competencies and working conditions. The
office became a workplace for an increasing number of people -- men and
women alike -- who, in one form or another, had something to do with
information processing and communication. Yet all of this required not
only new management techniques. Production and products also became more
complex, so that entire corporate sectors had to be restructured.
Whereas the first decisive inventions of the industrial era were still
made by more or less educated tinkerers, during the last third of the
nineteenth century, invention itself came to be institutionalized. In
Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
Siemens & Halske) exemplifies this transformation. Within 50 years, a
company that began in a proverbial workshop in a Berlin backyard became
a multinational high-tech corporation. It was in such corporate
laboratories, which were established around the year 1900, that the
"industrialization of invention" or the "scientification of industrial
production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
words, even the processes employed in factories and the goods that they
produced became knowledge-intensive. Their invention, planning, and
production required a steadily growing expansion of activities, which
today we would refer to as research and development. The informatization
of the economy -- the acceleration of mass production, the comprehensive
application of scientific methods to the organization of labor, and the
central role of research and development in industry -- was hastened
enormously by a world war that was waged on an industrial scale to an
extent that had never been seen before.
Another important factor for the increasing significance of the
knowledge economy was the development of the consumer society. Over the
course of the last third of the nineteenth century, despite dramatic
regional and social disparities, an increasing number of people profited
from the economic growth that the Industrial Revolution had instigated.
Wages increased and basic needs were largely met, so that a new social
stratum arose, the middle class, which was able to spend part of its
income on other things. But on what? First, []{#Page_15 type="pagebreak"
title="15"}new needs had to be created. The more production capacities
increased, the more they had to be rethought in terms of consumption.
Thus, in yet another way, the economy became more knowledge-intensive.
It was now necessary to become familiar with, understand, and stimulate
the interests and preferences of consumers, in order to entice them to
purchase products that they did not urgently need. This knowledge did
little to enhance the material or logistical complexity of goods or
their production; rather, it was reflected in the increasingly extensive
communication about and through these goods. The beginnings of this
development were captured by Émile Zola in his 1883 novel *The Ladies\'
Paradise*, which was set in the new world of a semi-fictitious
department store bearing that name. In its opening scene, the young
protagonist Denise Baudu and her brother Jean, both of whom have just
moved to Paris from a provincial town, encounter for the first time the
artfully arranged women\'s clothing -- exhibited with all sorts of
tricks involving lighting, mirrors, and mannequins -- in the window
displays of the store. The sensuality of the staged goods is so
overwhelming that both of them are struck dumb, and Jean even blushes.
It was the economy of affects that brought blood to Jean\'s cheeks. At
that time, strategies for attracting the attention of customers did not
yet have a scientific and systematic basis. Just as the first inventions
in the age of industrialization were made by amateurs, so too was the
economy of affects developed intuitively and gradually rather than as a
planned or conscious paradigm shift. That it was possible to induce and
direct affects by means of targeted communication was the pioneering
discovery of the Austrian-American Edward Bernays. During the 1920s, he
combined the ideas of his uncle Sigmund Freud about unconscious
motivations with the sociological research methods of opinion surveys to
form a new discipline: market
research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
basis of a new field of activity, which he at first called "propaganda"
but then later referred to as "public
relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
be it for economic or political ends, was now placed on a systematic
foundation that came to distance itself more and more from the pure
"conveyance of information." Communication became a strategic field for
corporate and political disputes, and the mass media []{#Page_16
type="pagebreak" title="16"}became their locus of negotiation. Between
1880 and 1917, for instance, commercial advertising costs in the United
States increased by more than 800 percent, and the leading advertising
firms, using the same techniques with which they attracted consumers to
products, were successful in selling to the American public the idea of
their nation entering World War I. Thus, a media industry in the modern
sense was born, and it expanded along with the rapidly growing market
for advertising.[^11^](#c1-note-0011){#c1-note-0011a}
In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
thus explained the existence of an already advanced knowledge economy in
the United States. His arguments fell on extremely fertile soil, for an
intellectual transformation had taken place in other areas of science as
well. A few years earlier, for instance, cybernetics had given the
concepts "information" and "communication" their first scientifically
precise (if somewhat idiosyncratic) definitions and had assigned to them
a position of central importance in all scientific disciplines, not to
mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
investigation seemed to confirm this in the case of the economy, given
that the knowledge economy was primarily concerned with information and
communication. Since then, numerous analyses, formulas, and slogans have
repeated, modified, refined, and criticized the idea that the
knowledge-based activities of the economy have become increasingly
important. In the 1970s this discussion was associated above all with
the notion of the "post-industrial
society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
and in the 1990s the debate revolved around the "network
society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
popular concepts. What these approaches have in common is that they each
diagnose a comprehensive societal transformation that, as regards the
creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
that we know how to distinguish the former from the latter. This is not
unproblematic, however, because in practice the two are usually tightly
intertwined. Moreover, whoever maintains that communicative activities
have taken the place of industrial production in our society has adopted
a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
Factory jobs have not simply disappeared; they have just been partially
relocated outside of Western economies. The assertion that communicative
activities are somehow of "greater value" hardly chimes with the reality
of today\'s new "service jobs," many of which pay no more than the
minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
sort, however, have done little to reduce the effectiveness of this
analysis -- especially its political effectiveness -- for it does more
than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
should be promoted that it considers economically promising, and that
society should be reorganized accordingly. Since the 1970s, there has
thus been a feedback loop between scientific analysis and political
agendas. More often than not, it is hardly possible to distinguish
between the two. Especially in Britain and the United States, the
economic transformation of the 1980s was imposed insistently and with
political calculation (for instance, through the weakening of labor unions).
There are, however, important differences between the developments of
the so-called "post-industrial society" of the 1970s and those of the
so-called "network society" of the 1990s, even if both terms are
supposed to stress the increased significance of information, knowledge,
and communication. With regard to the digital condition, the most
important of these differences are the greater flexibility of economic
activity in general and employment relations in particular, as well as
the dismantling of social security systems. Neither phenomenon played
much of a role in analyses of the early 1970s. The development since
then can be traced back to two currents that could not seem more
different from one another. At first, flexibility was demanded in the
name of a critique of the value system imposed by bureaucratic-bourgeois
society (including the traditional organization of the workforce). It
originated in the new social movements that had formed in the late
1960s. Later on, toward the end of the 1970s, it then became one of the
central points of the neoliberal critique of the welfare state. With
completely different motives, both sides sang the praises of autonomy
and spontaneity while rejecting the disciplinary nature of hierarchical
organization. They demanded individuality and diversity rather than
conformity to prescribed roles. Experimentation, openness to []{#Page_18
type="pagebreak" title="18"}new ideas, flexibility, and change were now
established as fundamental values with positive connotations. Both
movements operated with the attractive idea of personal freedom. The new
social movements understood this in a social sense as the freedom of
personal development and coexistence, whereas neoliberals understood it
in an economic sense as the freedom of the market. In the 1980s, the
neoliberal ideas prevailed in large part because some of the values,
strategies, and methods propagated by the new social movements were
removed from their political context and appropriated in order to
breathe new life -- a "new spirit" -- into capitalism and thus to rescue
industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
An army of management consultants, restructuring experts, and new
companies began to promote flat hierarchies, self-responsibility, and
innovation; with these aims in mind, they set about reorganizing large
corporations into small and flexible units. Labor and leisure were no
longer supposed to be separated, for all aspects of a given person could
be integrated into his or her work. In order to achieve economic success
in this new capitalism, it became necessary for every individual to
identify himself or herself with his or her profession. Large
corporations were restructured in such a way that entire departments
found themselves transformed into independent "profit centers." This
happened in the name of creating more leeway for decision-making and of
optimizing the entrepreneurial spirit on all levels, the goals being to
increase value creation and to provide management with more fine-grained
powers of intervention. These measures, in turn, created the need for
computers and the need for them to be networked. Large corporations
reacted in this way to the emergence of highly specialized small
companies which, by networking and cooperating with other firms,
succeeded in quickly and flexibly exploiting niches in the expanding
global markets. In the management literature of the 1980s, the
catchphrases for this were "company networks" and "flexible
specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
the 1990s, the sociologist Manuel Castells was able to conclude that the
actual productive entity was no longer the individual company but rather
the network consisting of companies and corporate divisions of various
sizes. In Castells\'s estimation, the decisive advantage of the network
is its ability to customize its elements and their configuration
[]{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
requirements of the "project" at
hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
strategic control centers and as economic and legal units.
This economic structural transformation was already well under way when
the internet emerged as a mass medium around the turn of the millennium.
As a consequence, change became more radical and penetrated into an
increasing number of areas of value creation. The political agenda
oriented itself toward the vision of "creative industries," a concept
developed in 1997 by the newly elected British government under Tony
Blair. A Creative Industries Task Force was established right away, and
its first step was to identify "those activities which have their
origins in individual creativity, skill and talent and which have the
potential for wealth and job creation through the generation and
exploitation of intellectual
property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
the beginning of the 1960s, the task force brought together existing
areas of activity into a new category. Such activities included
advertising, computer games, architecture, music, arts and antique
markets, publishing, design, software and computer services, fashion,
television and radio, and film and video. These activities were elevated to
matters of political importance on account of their potential to create
wealth and jobs. Not least because of this clever presentation of
categories -- no distinction was made between the BBC, an almighty
public-service provider, and fledgling companies in precarious
circumstances -- it was possible to proclaim not only that the creative
industries were contributing a relevant portion of the nation\'s
economic output, but also that this sector was growing at an especially
fast rate. It was reported that, in London, the creative industries were
already responsible for one out of every five new jobs. When compared
with traditional terms of employment as regards income, benefits, and
prospects for advancement, however, many of these positions entailed a
considerable downgrade for the employees in question (who were now
treated as independent contractors). This fact was either ignored or
explicitly interpreted as a sign of the sector\'s particular
dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
new millennium, the idea that individual creativity plays a central role
in the economy gained further traction thanks to []{#Page_20
type="pagebreak" title="20"}the sociologist and consultant Richard
Florida, who argued that creativity was essential to the future of
cities and even announced the rise of the "creative class." As to the
preconditions that have to be met in order to tap into this source of
wealth, he devised a simple formula that would be easy for municipal
bureaucrats to understand: "technology, tolerance and talent." Talent,
as defined by Florida, is based on individual creativity and education
and manifests itself in the ability to generate new jobs. He was thus
able to declare talent a central element of economic
growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
resources, what we need in addition to technology is, above all,
tolerance; that is, "an open culture -- one that does not discriminate,
does not force people into boxes, allows us to be ourselves, and
validates various forms of family and of human
identity."[^23^](#c1-note-0023){#c1-note-0023a}
The idea that a public welfare state should ensure the social security
of individuals was considered obsolete. Collective institutions, which
could have provided a degree of stability for people\'s lifestyles, were
dismissed or regarded as bureaucratic obstacles. The more or less
directly evoked role model for all of this was the individual artist,
who was understood as an individual entrepreneur, a sort of genius
suitable for the masses. For Florida, a central problem was that,
according to his own calculations, only about a third of the people
living in North American and European cities were working in the
"creative sector," while the innate creativity of everyone else was
going to waste. Even today, the term "creative industry," along with the
assumption that the internet will provide increased opportunities,
serves to legitimize the effort to restructure all areas of the economy
according to the needs of the knowledge economy and to privilege the
network over the institution. In times of social cutbacks and empty
public purses, especially in municipalities, this message was warmly
received. One mayor, who as the first openly gay top politician in
Germany exemplified tolerance for diverse lifestyles, even adopted the
slogan "poor but sexy" for his city. Everyone was supposed to exploit
his or her own creativity to discover new niches and opportunities for
monetization -- a magic formula that was supposed to bring about a new
urban revival. Today there is hardly a city in Europe that does not
issue a report about its creative economy, []{#Page_21 type="pagebreak"
title="21"}and nearly all of these reports cite, directly or indirectly,
Richard Florida.
As already seen in the context of the knowledge economy, so too in the
case of creative industries do measurable social change, wishful
thinking, and political agendas blend together in such a way that it is
impossible to identify a single cause for the developments taking place.
The consequences, however, are significant. Over the last two
generations, the demands of the labor market have fundamentally changed.
Higher education and the ability to acquire new knowledge independently
are now, to an increasing extent, required and expected as
qualifications and personal attributes. The desired or enforced ability
to be flexible at work, the widespread cooperation across institutions,
the uprooted nature of labor, and the erosion of collective models for
social security have displaced many activities, which once took place
within clearly defined institutional or personal limits, into a new
interstitial space that is neither private nor public in the classical
sense. This is the space of networks, communities, and informal
cooperation -- the space of sharing and exchange that has since been
enabled by the emergence of ubiquitous digital communication. It allows
an increasing number of people, whether willingly or otherwise, to
envision themselves as active producers of information, knowledge,
capability, and meaning. And because it is associated in various ways
with the space of market-based exchange and with the bourgeois political
sphere, it has lasting effects on both. This interstitial space becomes
all the more important as fewer people are willing or able to rely on
traditional institutions for their economic security. For, within it,
personal and digital-based networks can and must be developed as
alternatives, regardless of whether they prove sustainable for the long
term. As a result, more and more actors, each with their own claims to
meaning, have been rushing away from the private personal sphere into
this new interstitial space. By now, this has become such a normal
practice that whoever is *not* active in this ever-expanding
interstitial space, which is rapidly becoming the main social sphere --
whoever, that is, lacks a publicly visible profile on social mass media
like Facebook, or does not number among those producing information and
meaning and is thus so inconspicuous online as []{#Page_22
type="pagebreak" title="22"}to yield no search results -- now stands out
in a negative light (or, in far fewer cases, acquires a certain prestige
on account of this very absence).
:::
::: {.section}
### The erosion of heteronormativity {#c1-sec-0004}
In this (sometimes more, sometimes less) public space for the continuous
production of social meaning (and its exploitation), there is no
question that the professional middle class is
over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
short-sighted, however, to reduce those seeking autonomy and the
recognition of individuality and social diversity to the role of poster
children for the new spirit of
capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
movements, for instance, initiated a social shift that has allowed an
increasing number of people to demand, if nothing else, the right to
participate in social life in a self-determined manner; that is,
according to their own standards and values.
Especially effective was the critique of patriarchal and heteronormative
power relations, modes of conduct, and
identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
political upheavals at the end of the 1960s, the new women\'s and gay
movements developed into influential actors. Their greatest achievement
was to establish alternative cultural forms, lifestyles, and strategies
of action in or around the mainstream of society. How this was done can
be demonstrated by tracing, for example, the development of the gay
movement in West Germany.
In the fall of 1969, the liberalization of Paragraph 175 of the German
Criminal Code came into effect. From then on, sexual activity between
adult men was no longer punishable by law (women were not mentioned in
this context). For the first time, a man could now express himself as a
homosexual outside of semi-private space without immediately being
exposed to the risk of criminal prosecution. This was a necessary
precondition for the ability to defend one\'s own rights. As early as
1971, the struggle for the recognition of gay life experiences reached
the broader public when Rosa von Praunheim\'s film *It Is Not the
Homosexual Who Is Perverse, but the Society in Which He Lives* was
screened at the Berlin International Film Festival and then, shortly
thereafter, broadcast on public television in North Rhine-Westphalia.
The film, which is firmly situated in the agitprop tradition,
[]{#Page_23 type="pagebreak" title="23"}follows a young provincial man
through the various milieus of Berlin\'s gay subcultures: from a
monogamous relationship to nightclubs and public bathrooms until, at the
end, he is enlightened by a political group of men who explain that it
is not possible to lead a free life in a niche, as his own emancipation
can only be achieved by a transformation of society as a whole. The film
closes with a not-so-subtle call to action: "Out of the closets, into
the streets!" Von Praunheim understood this emancipation to be a process
that encompassed all areas of life and had to be carried out in public;
it could only achieve success, moreover, in solidarity with other
freedom movements such as the Black Panthers in the United States and
the new women\'s movement. The goal, according to this film, is to
articulate one\'s own identity as a specific and differentiated identity
with its own experiences, values, and reference systems, and to anchor
this identity within a society that not only tolerates it but also
recognizes it as having equal validity.
At first, however, the film triggered vehement controversies, even
within the gay scene. The objection was that it attacked the gay
subculture, which was not yet prepared to defend itself publicly against
discrimination. Despite or (more likely) because of these controversies,
more than 50 groups of gay activists soon formed in Germany. Such
groups, largely composed of left-wing alternative students, included,
for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
Zelle Schwul (RotZSchwul) in Frankfurt am
Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
was to have Paragraph 175 struck entirely from the legal code (which was
not achieved until 1994). This cause was framed within a general
struggle to overcome patriarchy and capitalism. At the earliest gay
demonstrations in Germany, which took place in Münster in April 1972,
protesters rallied behind the following slogan: "Brothers and sisters,
gay or not, it is our duty to fight capitalism." This was understood as
a necessary subordination to the greater struggle against what was known
in the terminology of left-wing radical groups as the "main
contradiction" of capitalism (that between capital and labor), and it
led to strident differences within the gay movement. The dispute
escalated during the next year. After the so-called *Tuntenstreit*, or
"Battle of the Queens," which was []{#Page_24 type="pagebreak"
title="24"}initiated by activists from Italy and France who had appeared
in drag at the closing ceremony of the HAW\'s Spring Meeting in West
Berlin, the gay movement was divided, or at least moving in a new
direction. At the heart of the matter were the following questions: "Is
there an inherent (many speak of an autonomous) position that gays hold
with respect to the issue of homosexuality? Or can a position on
homosexuality only be derived in association with the traditional
workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
words, was discrimination against homosexuality part of the social
divide caused by capitalism (that is, one of its "ancillary
contradictions") and thus only to be overcome by overcoming capitalism
itself, or was it something unrelated to the "essence" of capitalism, an
independent conflict requiring different strategies and methods? This
conflict could never be fully resolved, but the second position, which
was more interested in overcoming legal, social, and cultural
discrimination than in struggling against economic exploitation, and
which focused specifically on the social liberation of gays, proved to
be far more dynamic in the long term. This was not least because both
the old and new left were themselves not free of homophobia and because
the entire radical student movement of the 1970s fell into crisis.
Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
realized through the efforts of artistic and (increasingly) commercial
producers of images, texts, and
sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
intellectuals developed a language with which they could speak
assertively in public about topics that had previously been taboo.
Inspired by the expression "gay pride," which originated in the United
States, they began to use the term *schwul* ("gay"), which until then
had possessed negative connotations, with growing confidence. They
founded numerous gay and lesbian cultural initiatives, theaters,
publishing houses, magazines, bookstores, meeting places, and other
associations in order to counter the misleading or (in their eyes)
outright false representations of the mass media with their own
multifarious media productions. In doing so, they typically followed a
dual strategy: on the one hand, they wanted to create a space for the
members of the movement in which it would be possible to formulate and
live different identities; on the other hand, they were fighting to be
accepted by society at large. While []{#Page_25 type="pagebreak"
title="25"}a broader and broader spectrum of gay positions, experiences,
and aesthetics was becoming visible to the public, the connection to
left-wing radical contexts became weaker. Founded as early as 1974, and
likewise in West Berlin, the General Homosexual Working Group
(Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
politics into mainstream society by redefining such politics -- on the basis
of bourgeois individual rights -- as a "politics of
anti-discrimination." These efforts achieved a milestone in 1980 when,
in the run-up to the parliamentary election, a podium discussion was
held with representatives of all major political parties on the topic of
the law governing sexual offences. The discussion took place in the
Beethovenhalle in Bonn, which was the largest venue for political events
in the former capital. Several participants considered the event to be a
"disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
of internal conflicts (not least that between revolutionary and
integrative positions). Yet the fact remains that representatives were
present from every political party, and this alone was indicative of an
unprecedented amount of public awareness for those demanding equal
rights.
The struggle against discrimination and for social recognition reached
an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
the magazine *Der Spiegel* devoted its first cover story to the disease,
thus bringing it to the awareness of the broader public. In the same
year, the non-profit organization Deutsche Aids-Hilfe was founded to
prevent further cases of discrimination, for *Der Spiegel* was not the
only publication at the time to refer to AIDS as a "homosexual
epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
HIV/AIDS required a comprehensive mobilization. Funding had to be raised
in order to deal with the social repercussions of the epidemic, to educate
everyone about safe sexual practices, and to direct research
toward discovering causes and developing potential cures. The immediate
threat that AIDS represented, especially while so little was known about
the illness and its treatment remained a distant hope, created an
impetus for mobilization that led to alliances between the gay movement,
the healthcare system, and public authorities. Thus, the AIDS Inquiry
Committee, sponsored by the conservative Christian Democratic Union,
concluded in 1988 that, in the fight against the illness, "the
homosexual subculture is []{#Page_26 type="pagebreak"
title="26"}especially important. This informal structure should
therefore neither be impeded nor repressed but rather, on the contrary,
recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
crisis proved to be a catalyst for advancing the integration of gays
into society and for expanding what could be regarded as acceptable
lifestyles, opinions, and cultural practices. As a consequence,
homosexuals began to appear more frequently in the media, though their
presence would never match that of heterosexuals. As of 1985, the
television show *Lindenstraße* featured an openly gay protagonist, and
the first kiss between men was aired in 1987. The episode still provoked
a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
second time -- but this was already a rearguard action and the
integration of gays (and lesbians) into the social mainstream continued.
In 1993, the first gay and lesbian city festival took place in Berlin,
and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
Cologne Pride Day involved 1.2 million participants and attendees, thus
surpassing for the first time the attendance at the traditional Rose
Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
was already prepared to maintain: "To be homosexual has become
increasingly normalized, even if homophobia lives on in the depths of
the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
normalization was also reflected in a study published by the Ministry of
Justice in the year 2000, which stressed "the similarity between
homosexual and heterosexual relationships" and, on this basis, made an
argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
Around the year 2000, however, the classical gay movement had already
passed its peak. A profound transformation had begun to take place in
the middle of the 1990s. It lost its character as a new social movement
(in the style of the 1970s) and began to splinter inwardly and
outwardly. One could say that it transformed from a mass movement into a
multitude of variously networked communities. The clearest sign of this
transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
transgender), which, since the mid-1990s, has represented the internal
heterogeneity of the movement as it has shifted toward becoming a
network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
radical actors were already speaking against the normalization of
homosexuality. Queer theory, for example, was calling into question the
"essentialist" definition of gender []{#Page_27 type="pagebreak"
title="27"}-- that is, any definition reducing it to an immutable
essence -- with respect to both its physical dimension (sex) and its
social and cultural dimension (gender
proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
for the articulation of experiences, self-descriptions, and lifestyles
that, on every level, are located beyond the classical attributions of
men and women. A new generation of intellectuals, activists, and artists
took the stage and developed -- yet again through acts of aesthetic
self-empowerment -- a language that enabled them to import, with
confidence, different self-definitions into the public sphere. An
example of this is the adoption of inclusive plural forms in German
(*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
attention to the gaps and possibilities between male and female
identities that are also expressed in the language itself. Just as with
the terms "gay" or *schwul* some 30 years before, in this case, too, an
important element was the confident and public adoption and semantic
conversion of a formerly insulting word ("queer") by the very people and
communities against whom it used to be
directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
these developments was the simultaneity of social (amateur) and
artistic/scientific (professional) cultural production. The goal,
however, was less to produce a clear antithesis than it was to oppose
rigid attributions by underscoring mutability, hybridity, and
uniqueness. Both the scope of what could be expressed in public and the
circle of potential speakers expanded yet again. And, at least to some
extent, the drag queen Conchita Wurst popularized complex gender
constructions that went beyond the simple woman/man dualism. All of that
said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
lives on in the depths of the collective disposition" -- continued to
hold true.
If the gay movement is representative of the social liberation of the
1970s and 1980s, then it is possible to regard its transformation into
the LGBT movement during the 1990s -- with its multiplicity and fluidity
of identity models and its stress on mutability and hybridity -- as a
sign of the reinvention of this project within the context of an
increasingly dominant digital condition. With this transformation,
however, the diversification and fluidification of cultural practices
and social roles have not yet come to an end. Ways of life that were
initially subcultural and facing existential pressure []{#Page_28
type="pagebreak" title="28"}are gradually entering the mainstream. They
are expanding the range of readily available models of identity for
anyone who might be interested, be it with respect to family forms
(e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
other principles of life and belief. All of them are seeking public
recognition for a new frame of reference for social meaning that has
originated from their own activity. This is necessarily a process
characterized by conflicts and various degrees of resistance, including
right-wing populism that seeks to defend "traditional values," but many
of these movements will ultimately succeed in providing more people with
the opportunity to speak in public, thus broadening the palette of
themes that are considered to be important and legitimate.
:::
::: {.section}
### Beyond center and periphery {#c1-sec-0005}
In order to reach a better understanding of the complexity involved in
the expanding social basis of cultural production, it is necessary to
shift yet again to a different level. For, just as it would be myopic to
examine the multiplication of cultural producers only in terms of
professional knowledge workers from the middle class, it would likewise
be insufficient to situate this multiplication exclusively in the
centers of the West. The entire system of categories that justified the
differentiation between the cultural "center" and the cultural
"periphery" has begun to falter. This complex and multilayered process
has been formulated and analyzed by the theory of "post-colonialism."
Long before digital media made the challenge of cultural multiplicity a
quotidian issue in the West, proponents of this theory had developed
languages and terminologies for negotiating different positions without
needing to impose a hierarchical order.
Since the 1970s, the theoretical current of post-colonialism has been
examining the cultural and epistemic dimensions of colonialism that,
even after its end as a territorial system, have remained responsible
for the continuation of dependent relations and power differentials. For
my purposes -- which are to develop a European perspective on the
factors ensuring that more and more people are able to participate in
cultural []{#Page_29 type="pagebreak" title="29"}production -- two
points are especially relevant because their effects reverberate in
Europe itself. First is the deconstruction of the categories "West" (in
the sense of the center) and "East" (in the sense of the periphery). And
second is the focus on hybridity as a specific way for non-Western
actors to deal with the dominant cultures of former colonial powers,
which have continued to determine significant portions of globalized
culture. The terms "West" and "East," "center" and "periphery," do not
simply describe existing conditions; rather, they are categories that
contribute, in an important way, to the creation of the very conditions
that they presume to describe. This may sound somewhat circular, but it
is precisely from this circularity that such cultural classifications
derive their strength. The world that they illuminate is immersed in
their own light. The category "East" -- or, to use the term of the
literary theorist Edward Said,
"orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
representation that pervades Western thinking. Within this system,
Europe or the West (as the center) and the East (as the periphery)
represent asymmetrical and antithetical concepts. This construction
achieves a dual effect. On the one hand, as a self-description, it
contributes to the formation of our own identity: Europeans attribute to
themselves and to their continent such features as "rationality,"
"order," and "progress," while ascribing "superstition," "chaos," or
"stagnation" to the East. On the other hand, the East is used as an
exotic projection screen for our own suppressed desires. According to
Said, a representational
system of this sort can only take effect if it becomes "hegemonic"; that
is, if it is perceived as self-evident and no longer as an act of
attribution but rather as one of description, even and precisely by
those against whom the system discriminates. Said\'s accomplishment is
to have worked out how far-reaching this system was and, in many areas,
it remains so today. It extended (and extends) from scientific
disciplines, whose researchers discussed (until the 1980s) the theory of
"oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
and art -- the motif of the harem was especially popular
in paintings of the late nineteenth
century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
culture, where, as of 1913 in the United States, the cigarette brand
Camel (introduced to compete with the then-leading brand, Fatima) was
meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
system of representation, however, was more than a means of describing
oneself and others; it also served to legitimize the allocation of all
knowledge and agency on to one side, that of the West. Such an order was
not restricted to culture; it also created and legitimized a sense of
domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
This cultural legitimation, as Said points out, also persists after the
end of formal colonial domination and continues to marginalize the
postcolonial subjects. As before, they are unable to speak for
themselves and therefore remain in the dependent periphery, which is
defined by their subordinate position in relation to the center. Said
directed the focus of critique to this arrangement of center and
periphery, which he saw as being (re)produced and legitimized on the
cultural level. From this arose the demand that everyone should have the
right to speak, to place him- or herself in the center. To achieve this,
it was necessary first of all to develop a language -- indeed, a
cultural landscape -- that can manage without a hegemonic center and is
thus oriented toward multiplicity instead of
uniformity.[^43^](#c1-note-0043){#c1-note-0043a}
A somewhat different approach has been taken by the literary theorist
Homi K. Bhabha. He proceeds from the idea that the colonized never fully
passively adopt the culture of the colonialists -- the "English book,"
as he calls it. Their previous culture is never simply wiped out and
replaced by another. What always and necessarily occurs is rather a
process of hybridization. This concept, according to Bhabha,
::: {.extract}
suggests that all of culture is constructed around negotiations and
conflicts. Every cultural practice involves an attempt -- sometimes
good, sometimes bad -- to establish authority. Even classical works of
art, such as a painting by Brueghel or a composition by Beethoven, are
concerned with the establishment of cultural authority. Now, this poses
the following question: How does one function as a negotiator when
one\'s own sense of agency is limited, for instance, on account of being
excluded or oppressed? I think that, even in the role of the underdog,
there are opportunities to upend the imposed cultural authorities -- to
accept some aspects while rejecting others. It is in this way that
symbols of authority are hybridized and made into something of one\'s
own. For me, hybridization is not simply a mixture but rather a
[]{#Page_31 type="pagebreak" title="31"}strategic and selective
appropriation of meanings; it is a way to create space for negotiators
whose freedom and equality are
endangered.[^44^](#c1-note-0044){#c1-note-0044a}
:::
Hybridization is thus a cultural strategy for evading marginality that
is imposed from the outside: subjects who, from the dominant perspective,
are deemed incapable of doing so appropriate certain aspects of culture for
themselves and transform them into something else. What is decisive is
that this hybrid, created by means of active and unauthorized
appropriation, opposes the dominant version and the resulting speech is
thus legitimized from another -- that is, from one\'s own -- position.
In this way, a cultural engagement is set under way and the superiority
of one meaning or another is called into question. Who has the right to
determine how and why a relationship with others should be entered,
which resources should be appropriated from them, and how these
resources should be used? At the heart of the matter lie the abilities
of speech and interpretation; these can be seized in order to create
space for a "cultural hybridity that entertains difference without an
assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}
At issue is thus a strategy for breaking down hegemonic cultural
conditions, which distribute agency in a highly uneven manner, and for
turning one\'s own cultural production -- which has been dismissed by
cultural authorities as flawed, misconceived, or outright ignorant --
into something negotiable and independently valuable. Bhabha is thus
interested in fissures, differences, diversity, multiplicity, and
processes of negotiation that generate something like shared meaning --
culture, as he defines it -- instead of conceiving of it as something
that precedes these processes and is threatened by them. Accordingly, he
proceeds not from the idea of unity, which is threatened whenever
"others" are empowered to speak and needs to be preserved, but rather
from the irreducible multiplicity that, through laborious processes, can
be brought into temporary and limited consensus. Bhabha\'s vision of
culture is one without immutable authorities, interpretations, and
truths. In theory, everything can be brought to the table. This is not a
situation in which anything goes, yet the central meaning of
negotiation, the contextuality of consensus, and the mutability of every
frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
which can be shared equally by everyone -- are always potentially
negotiable.
Post-colonialism draws attention to the "disruptive power of the
excluded-included third," which becomes especially virulent when it
"emerges in the middle of semantic
structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
this power reveals the increasing cultural independence of those
formerly colonized, and it also transforms the cultural self-perception
of the West, for, even in Western nations that were not significant
colonial powers, there are multifaceted tensions between dominant
cultures and those who are on the defensive against discrimination and
attributions by others. Instead of the old recipe of integration through
assimilation (that is, the dissolution of the "other"), the right to
self-determined difference is being demanded more emphatically. In such
a manner, collective identities, such as
national identities, are freed from their questionable appeals to
cultural homogeneity and essentiality, and reconceived in terms of the
experience of immanent difference. Instead of one binding and
unnegotiable frame of reference for everyone, which hierarchizes
individual positions and makes them appear unified, a new order without
such limitations needs to be established. Ultimately, the aim is to
provide nothing less than an "alternative reading of
modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
the construction of the past and the modalities of the future. For
European culture in particular, such a project is an immense challenge.
Of course, these demands do not derive their everyday relevance
primarily from theory but rather from the experiences of
(de)colonization, migration, and globalization. Multifaceted as it is,
however, the theory does provide forms and languages for articulating
these phenomena, legitimizing new positions in public debates, and
attacking persistent mechanisms of cultural marginalization. It helps to
empower broader societal groups to become actively involved in cultural
processes, namely people, such as migrants and their children, whose
identity and experience are essentially shaped by non-Western cultures.
The latter have been giving voice to their experiences more frequently
and with greater confidence in all areas of public life, be it in
politics, literature, music, or
art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
experience of immigration is represented as part of the German
experience, have reached a wide public audience. In 2002, the group
Kanak Attak organized a series of conferences with the telling motto *no
integración*, and these did much to introduce postcolonial positions to
the debates taking place in German-speaking
countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
politicians with "migration backgrounds" were considered to be competent
in only one area, namely integration policy. This has since changed,
though not entirely. In 2008, for instance, Cem Özdemir was elected
co-chair of the Green Party and thus shares responsibility for all of
its political positions. Developments of this sort have been enabled
(and strengthened) by a shift in society\'s self-perception. In 2014,
Cemile Giousouf, the integration commissioner for the conservative
CDU/CSU alliance in the German Parliament, was able to make the
following statement without inciting any controversy: "Over the past few
years, Germany has become a modern land of
immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
proclamation. Not ten years earlier, her party colleague Norbert Lammert
had expressed, in his function as parliamentary president, interest in
reviving the debate about the term "leading culture." The increasingly
well-educated migrants of the first, second, or third generation no
longer accept the choice of being either marginalized as exotic
representatives of the "other" or entirely assimilated. Rather, they are
insisting on being able to introduce their specific experience as a
constitutive contribution to the formation of the present -- in
association and in conflict with other contributions, but at the same
level and with the same legitimacy. It is no surprise that various forms
of discrimination and violence against "foreigners" not only continue
in everyday life but have also been increasing in reaction to this new
situation. Ultimately, established claims to power are being called into
question.
To summarize, at least three secular historical tendencies or movements,
some of which can be traced back to the late nineteenth century but each
of which gained considerable momentum during the last third of the
twentieth (the spread of the knowledge economy, the erosion of
heteronormativity, and the focus of post-colonialism on cultural
hybridity), have greatly expanded the sphere of those who actively
negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
large part, the patterns and cultural foundations of these processes
developed long before the internet. Through the use of the internet, and
through the experiences of dealing with it, they have encroached upon
far greater portions of all societies.
:::
:::
::: {.section}
The Culturalization of the World {#c1-sec-0006}
--------------------------------
The number of participants in cultural processes, however, is not the
only thing that has increased. Parallel to that development, the field
of the cultural has expanded as well -- that is, those areas of life
that are not simply characterized by unalterable necessities, but rather
contain or generate competing options and thus require conscious
decisions.
The term "culturalization of the economy" refers to the central position
of knowledge-based, meaning-based, and affect-oriented processes in the
creation of value. With the emergence of consumption as the driving
force behind the production of goods and the concomitant necessity of
having not only to satisfy existing demands but also to create new ones,
the cultural and affective dimensions of the economy began to gain
significance. I have already discussed the beginnings of product
staging, advertising, and public relations. In addition to all of the
continuities that remain with us from that time, it is also possible to
point out a number of major changes that consumer society has undergone
since the late 1960s. These changes can be delineated by examining the
greater role played by design, which has been called the "core
discipline of the creative
economy."[^51^](#c1-note-0051){#c1-note-0051a}
As a field of its own, design originated alongside industrialization,
when, in collaborative processes, the activities of planning and
designing were separated from those of carrying out
production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
modern era that designers consciously sought new forms for
the logic inherent in mass production. With the aim of economic
efficiency, they intended their designs to optimize the clearly defined
functions of anonymous and endlessly reproducible objects. At the end of
the nineteenth century, the architect Louis Sullivan, whose buildings
still distinguish the skyline of Chicago, condensed this new attitude
into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
follows function." Mies van der Rohe, working as an architect in Chicago
in the middle of the twentieth century, supplemented this with a pithy
and famous formulation of his own: "less is more." The rationality of
design, in the sense of isolating and improving specific functions, and
the economical use of resources were of chief importance to modern
(industrial) designers. Even the ten design principles of Dieter Rams,
who led the design division of the consumer products company Braun from
1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
Apple\'s chief design officer -- aimed to make products "usable,"
"understandable," "honest," and "long-lasting." "Good design," according
to his guiding principle, "is as little design as
possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
the technical and functional promised to solve problems for everyone in
a long-term and binding manner, for the inherent material and design
qualities of an object were supposed to make it independent from
changing times and from the tastes of consumers.
::: {.section}
### Beyond the object {#c1-sec-0007}
At the end of the 1960s, a new generation of designers rebelled against
this industrial and instrumental rationality, which was now felt to be
authoritarian, soulless, and reductionist. In the works associated with
"anti-design" or "radical design," the objectives of the discipline were
redefined and a new formal language was developed. In the place of
technical and functional optimization, recombination -- ecological
recycling or the postmodern interplay of forms -- emerged as a design
method and aesthetic strategy. Moreover, the aspiration of design
shifted from the individual object to its entire social and material
environment. The processes of design and production, which had been
closed off from one another and restricted to specialists, were opened
up precisely to encourage the participation of non-designers, be it
through interdisciplinary cooperation with other types of professions or
through the empowerment of laymen. The objectives of design were
radically expanded: rather than ending with the completion of an
individual product, design was now supposed to engage with society. In the
sense of cybernetics, this was regarded as a "system," controlled by
feedback processes, []{#Page_36 type="pagebreak" title="36"}which
connected social, technical, and biological dimensions to one
another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
new approach, was meant to be a "socially significant
activity."[^55^](#c1-note-0055){#c1-note-0055a}
Embedded in the social movements of the 1960s and 1970s, this new
generation of designers was curious about the social and political
potential of their discipline, and about possibilities for promoting
flexibility and autonomy instead of rigid industrial efficiency. Design
was no longer expected to solve problems once and for all, for such an
idea did not correspond to the self-perception of an open and mutable
society. Rather, it was expected to offer better opportunities for
enabling people to react to continuously changing conditions. A radical
proposal was developed by the Italian designer Enzo Mari, who in 1974
published his handbook *Autoprogettazione* (Self-Design). It contained
19 simple designs with which people could make, on their own,
aesthetically and functionally sophisticated furniture out of pre-cut
pieces of wood. In this case, the designs themselves were less important
than the critique of conventional design as elitist and of consumer
society as alienated and wasteful. Mari\'s aim was to reconceive the
relations among designers, the manufacturing industry, and users.
Increasingly, design came to be understood as a holistic and open
process. Victor Papanek, the founder of ecological design, took things a
step further. For him, design was "basic to all human activity. The
planning and patterning of any act towards a desired, foreseeable end
constitutes the design process. Any attempt to separate design, to make
it a thing-by-itself, works counter to the inherent value of design as
the primary underlying matrix of
life."[^56^](#c1-note-0056){#c1-note-0056a}
Potentially all aspects of life could therefore fall under the purview
of design. This expansion arose from the desire to counter industrialism,
which was blind to its catastrophic social and ecological consequences,
with a new and comprehensive manner of seeing and acting that was
unrestricted by economics.
Toward the end of the 1970s, this expanded notion of design owed less
and less to emancipatory social movements, and its socio-political goals
began to fall by the wayside. Three fundamental patterns survived,
however, which go beyond design and remain characteristic of the
culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
the discovery of the public as emancipated users and active
participants; the use of appropriation, transformation, and
recombination as methods for creating ever-new aesthetic
differentiations; and, finally, the intention of shaping the lifeworld
of the user.[^57^](#c1-note-0057){#c1-note-0057a}
As these patterns became depoliticized and commercialized, the focus of
designing the "lifeworld" shifted more and more toward designing the
"experiential world." By the end of the 1990s, this had become so
normalized that even management consultants could assert that
"\[e\]xperiences represent an existing but previously unarticulated
*genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
possible to define the dimensions of the experiential world in various
ways. For instance, it could be clearly delimited and product-oriented,
like the flagship stores introduced by Nike in 1990, which, with their
elaborate displays, were meant to turn shopping into an experience. This
experience, the company\'s executives hoped, would radiate outward and
influence how the brand was perceived as a whole. The experiential
world could also, however, be conceived in somewhat broader terms, for
instance by designing entire institutions around the idea of creating a
more attractive work environment and thereby increasing the commitment
of employees. This approach is widespread today in creative industries
and has been popularized through countless stories about ping-pong
tables, gourmet cafeterias, and massage rooms in certain offices. In
this case, the process of creativity is applied back to itself in order
to systematize and optimize a given workplace\'s basis of operation. The
development is comparable to the "invention of invention" that
characterized industrial research around the end of the nineteenth
century, though now the concept has been relocated to the field of
knowledge production.
Yet the "experiential world" can be expanded even further, for instance
when entire cities attempt to make themselves attractive to
international clientele and compete with others by building spectacular
museums or sporting arenas. Displays in cities, as well as a few other
central locations, are regularly constructed in order to produce a
particular experience. This also entails, however, that certain forms of
use that fail to fit the "urban
script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
is hardly a single area of life to []{#Page_38 type="pagebreak"
title="38"}which the strategies and methods of design do not have
access, and this access occurs at all levels. For some time, design has
not been a purely visible matter, restricted to material objects; it
rather forms and controls all of the senses. Cities, for example, have
come to be understood increasingly as "sound spaces" and have
accordingly been reconfigured with the goal of modulating their various
noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
just a matter of objects, processes, and experiences. By now, in the
context of reproductive medicine, it has even been applied to the
biological foundations of life ("designer babies"). I will revisit this
topic below.
:::
Of course, design is not the only field of culture that has imposed
itself over society as a whole. A similar development has occurred in
the field of advertising, which, since the 1970s, has been integrated
into many more physical and social spaces and by now has a broad range
of methods at its disposal. Advertising is no longer found simply on
billboards or in display windows. In the form of "guerrilla marketing" or
"product placement," it has penetrated every space and occupied every
discourse -- by blending with political messages, for instance -- and
can now even be spread, as "viral marketing," by the addressees of the
advertisements themselves. Similar processes can be observed in the
fields of art, fashion, music, theater, and sports. This has taken place
perhaps most radically in the field of "gaming," which has drawn upon
technical progress in the most direct possible manner and, with the
spread of powerful computers and mobile applications, has left behind
the confines of the traditional playing field. In alternate reality
games, the realm of the virtual and fictitious has also been
transcended, as physical spaces have been overlaid with their various
scripts.[^62^](#c1-note-0062){#c1-note-0062a}
This list could be extended, but the basic trend is clear enough,
especially as the individual fields overlap and mutually influence one
another. They are blending into a single interdependent field for
generating social meaning in the form of economic activity. Moreover,
through digitalization and networking, many new opportunities have
arisen for large-scale involvement by the public in design processes.
Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
technologies and flexible production processes, today\'s users can
personalize and create products to suit their wishes. Here, the spectrum
extends from tiny batches of creative-industrial products all the way to
global processes of "mass customization," in which factory-based mass
production is combined with personalization. One of the first
applications of this was introduced in 1999 when, through its website, a
sporting-goods company allowed customers to design certain elements of a
shoe by altering it within a set of guidelines. This was taken a step
further by the idea of "user-centered innovation," which relies on the
specific knowledge of users to enhance a product, with the additional
hope of discovering unintended applications and transforming these into
new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
become possible for end users to take over the design process from the
beginning, which has become considerably easier with the advent of
specialized platforms for exchanging knowledge, alongside semi-automated
production tools such as mechanical mills and 3D printers.
Digitalization, which has allowed all content to be processed, and
networking, which has created an endless amount of content ("raw
material"), have turned appropriation and recombination into general
methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
This phenomenon will be examined more closely in the next chapter.
Both the involvement of users in the production process and the methods
of appropriation and recombination are extremely information-intensive
and communication-intensive. Without the corresponding technological
infrastructure, neither could be achieved efficiently or on a large
scale. This was evident in the 1970s, when such approaches never made it
beyond subcultures and conceptual studies. With today\'s search engines,
every single user can trawl through an amount of information that, just
a generation ago, would have been unmanageable even by professional
archivists. A broad array of communication platforms (together with
flexible production capacities and efficient logistics) not only weakens
the contradiction between mass fabrication and personalization; it also
allows users to network directly with one another in order to develop
specialized knowledge together and thus to enable themselves to
intervene directly in design processes, both as []{#Page_40
type="pagebreak" title="40"}willing participants in and as critics of
flexible global production processes.
:::
:::
::: {.section}
The Technologization of Culture {#c1-sec-0009}
-------------------------------
That society is dependent on complex information technologies in order
to organize its constitutive processes is, in itself, nothing new.
Rather, this began as early as the late nineteenth century. It is
directly correlated with the expansion and acceleration of the
circulation of goods, which came about through industrialization. As the
historian and sociologist James Beniger has noted, this led to a
"control crisis," for administrative control centers were faced with the
problem of losing sight of what was happening in their own factories,
with their suppliers, and in the important markets of the time.
Management was in a bind: decisions had to be made either on the basis
of insufficient information or too late. The existing administrative and
control mechanisms could no longer deal with the rapidly increasing
complexity and time-sensitive nature of extensively organized production
and distribution. The office became more important, and ever more people
were needed there to fulfill a growing number of functions. Yet this was
not enough for the crisis to subside. The old administrative methods,
which involved manual information processing, simply could no longer
keep up. The crisis reached its first dramatic peak in 1889 in the
United States, with the realization that the census data from the year
1880 had not yet been analyzed when the next census was already
scheduled to take place during the subsequent year. In the same year,
the Secretary of the Interior organized a conference to investigate
faster methods of data processing. Two approaches to making manual labor more efficient were tested against a system that promised greater efficiency by means of novel data-processing machines. The latter emerged as the clear victor; developed by an engineer named Herman Hollerith, it mechanically processed and stored data on
punch cards. The idea was based on Hollerith\'s observations of the
coupling and decoupling of railroad cars, which he interpreted as
modular units that could be combined in any desired order. The punch
card transferred this approach to information []{#Page_41
type="pagebreak" title="41"}management. Data were no longer stored in
fixed, linear arrangements (tables and lists) but rather in small units
(the punch cards) that, like railroad cars, could be combined in any
given way. The increase in efficiency -- with respect to speed *and*
flexibility -- was enormous, and nearly a hundred of Hollerith\'s
machines were used by the Census
Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
in the history of information processing, with technical means no longer
being used exclusively to store data, but to process data as well. This
was the only way to avoid the impending crisis, ensuring that
bureaucratic management could maintain centralized control. Hollerith\'s
machines proved to be a resounding success and were implemented in many
more branches of government and corporate administration, where
data-intensive processes had increased so rapidly they could not have
been managed without such machines. This growth was accompanied by that
of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
which, after a number of mergers, was renamed in 1924 as the
International Business Machines Corporation (IBM). Throughout the
following decades, dependence on information-processing machines only
deepened. The growing number of social, commercial, and military
processes could only be managed by means of information technology. This
largely took place, however, outside of public view, namely in the
specialized divisions of large government and private organizations.
These were the only institutions in command of the necessary resources
for operating the complex technical infrastructure -- so-called
mainframe computers -- that was essential to automatic information
processing.
::: {.section}
### The independent media {#c1-sec-0010}
As with so much else, this situation began to change in the 1960s. Mass
media and information-processing technologies began to attract
criticism, even though all of the involved subcultures, media activists,
and hackers continued to act independently from one another until the
1990s. The freedom-oriented social movements of the 1960s began to view
the mass media as part of the political system against which they were
struggling. The connections among the economy, politics, and the media
were becoming more apparent, not []{#Page_42 type="pagebreak"
title="42"}least because many mass media companies, especially those in
Germany related to the Springer publishing house, were openly inimical
to these social movements. Critical theories arose that, borrowing
Louis Althusser\'s influential term, regarded the media as part of the
"ideological state apparatus"; that is, as one of the authorities whose
task is to influence people to accept social relations to such a degree
that the "repressive state apparatuses" (the police, the military, etc.)
form a constant background in everyday
life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
condition in which the governed are manipulated to form a cultural
consensus with the ruling class; they accept the latter\'s
presuppositions (and the politics which are thus justified) even though,
by doing so, they are forced to suffer economic
disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
Situationists attributed to the media a central role in the new form of
rule known as "the spectacle," the glittery surfaces and superficial
manifestations of which served to conceal society\'s true
relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
aligned themselves with the critique of the "culture industry," which
had been formulated by Max Horkheimer and Theodor W. Adorno at the
beginning of the 1940s and had become a widely discussed key text by the
1960s.
Their differences aside, these perspectives were united in that they no
longer understood the "public" as a neutral sphere, in which citizens
could inform themselves freely and form their opinions, but rather as
something that was created with specific intentions and consequences.
From this grew an interest in "counter-publics"; that is, in forums
where other actors could appear and negotiate theories of their own. The
mass media thus became an important instrument for organizing the
bourgeois--capitalist public, but they were also responsible for the
development of alternatives. Media, according to one of the core ideas of these new approaches, are not so much a sphere in which an external reality is depicted as a constitutive element of reality itself.
:::
::: {.section}
### Media as lifeworlds {#c1-sec-0011}
Another branch of new media theories, that of Marshall McLuhan and the
Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
[]{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
different grounds. In 1964, McLuhan aroused a great deal of attention
with his slogan "the medium is the message." He maintained that every
medium of communication, by means of its media-specific characteristics,
directly affected the consciousness, self-perception, and worldview of
every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
believed, happens independently of and in addition to whatever specific
message a medium might be conveying. From this perspective, reality does
not exist outside of media, given that media codetermine our personal
relation to and behavior in the world. For McLuhan and the Toronto
School, media were thus not channels for transporting content but rather
the all-encompassing environments -- galaxies -- in which we live.
Such ideas were circulating much earlier and were intensively developed
by artists, many of whom were beginning to experiment with new
electronic media. An important starting point in this regard was the
1963 exhibit *Exposition of Music -- Electronic Television* by the
Korean artist Nam June Paik, who was then collaborating with Karlheinz Stockhausen in Cologne. Among other things, Paik presented 12
television sets, the screens of which were "distorted" by magnets. Here,
however, "distorted" is a problematic term, for, as Paik explicitly
noted, the electronic images were "a beautiful slap in the face of
classic dualism in philosophy since the time of Plato. \[...\] Essence
AND existence, essentia AND existentia. In the case of the electron,
however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
Paik no longer understood the electronic image on the television screen
as a portrayal or representation of anything. Rather, it engendered in
the moment of its appearance an autonomous reality beyond and
independent of its representational function. A whole generation of
artists began to explore forms of existence in electronic media, which
they no longer understood as pure media of information. In his work
*Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
end of a corridor that was approximately 10 meters long but only 50
centimeters wide. On the lower monitor ran a video showing the empty
hallway. The upper monitor displayed an image captured by a camera
installed at the entrance of the hall, about 3 meters high. If the
viewer moved down the corridor toward the two []{#Page_44
type="pagebreak" title="44"}monitors, he or she would thus be recorded
by the latter camera. Yet the closer one came to the monitor, the
farther one would be from the camera, so that one\'s image on the
monitor would become smaller and smaller. Recorded from behind, viewers
would thus watch themselves walking away from themselves. Surveillance
by others, self-surveillance, recording, and disappearance were directly
and intuitively connected with one another and thematized as fundamental
issues of electronic media.
Toward the end of the 1960s, the easier availability and mobility of
analog electronic production technologies promoted the search for
counter-publics and the exploration of media as comprehensive
lifeworlds. In 1967, Sony introduced its first Portapak system: a
battery-powered, self-contained recording system -- consisting of a
camera, a cord, and a recorder -- with which it was possible to make
(black-and-white) video recordings outside of a studio. Although the
recording apparatus, which required additional devices for editing and
projection, was offered at the relatively expensive price of \$1,500
(which corresponds to about €8,000 today), it was still affordable for
interested groups. Compared with traditional film cameras, these new devices considerably lowered the initial hurdle for
media production, for video tapes were not only much cheaper than film
reels (and could be used for multiple recordings); they also made it
possible to view recorded material immediately and on location. This
enabled the production of works that were far more intuitive and
spontaneous than earlier ones. The 1970s saw the formation of many video
groups, media workshops, and other initiatives for the independent
production of electronic media. Through their own distribution,
festivals, and other channels, such groups created alternative public
spheres. The latter became especially prominent in the United States
where, at the end of the 1960s, the providers of cable networks were
legally obligated to establish public-access channels, on which citizens
were able to operate self-organized and non-commercial television
programs. This gave rise to a considerable public-access movement there,
which at one point extended across 4,000 cities and was responsible for
producing programs from and for these different
communities.[^72^](#c1-note-0072){#c1-note-0072a}[]{#Page_45 type="pagebreak" title="45"}
What these initiatives, in Western Europe and the United States alike, had in common was their attempt to close the gap between the
consumption and production of media, to activate the public, and at
least in part to experiment with the media themselves. Non-professional
producers were empowered with the ability to control who told their
stories and how this happened. Groups that previously had no access to the media public sphere now had opportunities to represent themselves
and their own interests. By working together on their own productions,
such groups demystified the medium of television and simultaneously
equipped it with a critical consciousness.
Especially well received in Germany was the work of Hans Magnus
Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
radio theory) in favor of distinguishing between "repressive" and
"emancipatory" uses of media. For him, the emancipatory potential of
media lay in the fact that "every receiver is \[...\] a potential
transmitter" that can participate "interactively" in "collective
production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
first German video group, Telewissen, debuted in public with a
demonstration in downtown Darmstadt. In 1980, at the peak of the
movement for independent video production, there were approximately a
hundred such groups throughout (West) Germany. The lack of distribution
channels, however, represented a nearly insuperable obstacle and ensured
that many independent productions were seldom viewed outside of
small-scale settings. Tapes had to be exchanged between groups through
the mail, and they were mainly shown at gatherings and events, and in
bars. The dynamic of alternative media shifted toward a small subculture
(though one networked throughout all of Europe) of pirate radio and
television broadcasters. At the beginning of the 1980s, Radio Dreyeckland in Freiburg, which had been founded in 1977 as Radio Verte Fessenheim, began operating as Germany\'s first pirate or citizens\' radio station, regularly broadcasting information about the political protest movements that had arisen against the use of nuclear power in Fessenheim (France), Wyh