algorithm in Stalder 2018


d by machines. This extracted
information is then accessible to human perception and can serve as the
basis of singular and communal activity. Faced with the enormous amount
of data generated by people and machines, we would be blind were it not
for algorithms.

The third chapter will focus on *political dimensions*. These are the
factors that enable the formal dimensions described in the preceding
chapter to manifest themselves in the form of social, political, and
economic projects. Whereas the first ch


lace within a larger framework whose existence and
development depend on []{#Page_58 type="pagebreak" title="58"}communal
formations. "Algorithmicity" denotes those aspects of cultural processes
that are (pre-)arranged by the activities of machines. Algorithms
transform the vast quantities of data and information that characterize
so many facets of present-day life into dimensions and formats that can
be registered by human perception. It is impossible to read the content
of billions of websites. Therefore we turn to services such as Google\'s
search algorithm, which reduces the data flood ("big data") to a
manageable amount and translates it into a format that humans can
understand ("small data"). Without them, human beings could not
comprehend or do anything within a culture built around digital
technologies.


ader contexts
and assign more or less clear meanings to them. They consequently become
more open to interpretation. A search result does not articulate an
interpretive field of reference but merely a connection, created by
constantly changing search algorithms, between a request and the corpus
of material, which is likewise constantly changing.

Precisely because it offers so many different approaches to more or less
freely combinable elements of information, []{#Page_70 type="pagebreak"
title="70"}the or


described
above -- referentiality and communality -- are not the only new
mechanisms for filtering, sorting, aggregating, and evaluating things.
Beneath or ahead of the social mechanisms of decentralized and networked
cultural production, there are algorithmic processes that pre-sort the
immeasurably large volumes of data and convert them into a format that
can be apprehended by individuals, evaluated by communities, and
invested with meaning.

Strictly speaking, it is impossible to maintain a categorica


o make such a
distinction can nevertheless be productive in practice, for in this way
it is possible to gain different perspectives about the same object of
investigation.[]{#Page_103 type="pagebreak" title="103"}
:::

::: {.section}
### The rise of algorithms {#c2-sec-0020}

An algorithm is a set of instructions for converting a given input into
a desired output by means of a finite number of steps: algorithms are
used to solve predefined problems. For a set of instructions to become
an algorithm, it has to be determined in three different respects.
First, the necessary steps -- individually and as a whole -- have to be
described unambiguously and completely. To do this, it is usually
necessary to use a formal language, such as mathematics, in
order to avoid the characteristic imprecision
and ambiguity of natural language and to ensure instructions can be
followed without interpretation. Second, it must be possible in practice
to execute the individual steps together. For this reason, every
algorithm is tied to the context of its realization. If the context
changes, so do the operating processes that can be formalized as
algorithms and thus also the ways in which algorithms can partake in the
constitution of the world. Third, it must be possible to execute an
operating instruction mechanically so that, under fixed conditions, it
always produces the same result.
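These three requirements -- unambiguous description, practical executability, and mechanical repeatability -- can be seen in a classic example, Euclid's algorithm for the greatest common divisor. The Python rendering below is an illustration of mine, not code from the text:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: greatest common divisor of two positive integers.

    Each step is described unambiguously, can be executed in practice,
    and is mechanical: under fixed conditions, the same input always
    produces the same result in a finite number of steps.
    """
    while b != 0:        # terminates: b strictly decreases each round
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 36))  # 12
```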

Defined in such general terms, it would also be possible


tify to the unique nature
of the (unsuccessful) execution, or that, inspired by the micro-trend of
"Ikea hacking," the official instructions are intentionally ignored.

Because such imprecision is supposed to be avoided, the most important
domain of algorithms in practice is mathematics and its implementation
on the computer. The term []{#Page_104 type="pagebreak"
title="104"}"algorithm" derives from the Persian mathematician,
astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
t


m of practically realizing such a machine.

The decisive step that turned the vision of calculating machines into
reality was made by Alan Turing in 1937. With []{#Page_105
type="pagebreak" title="105"}a theoretical model, he demonstrated that
every algorithm could be executed by a machine as long as it could read
an incremental set of signs, manipulate them according to established
rules, and then write them out again. The validity of his model did not
depend on whether the machine would be analog or dig


its hardware). The electronic and digital approach came
to be preferred because it was hoped that even the instructions could be
read by the machine itself, so that the machine would be able to execute
not only one but (theoretically) every written algorithm. The
Hungarian-born mathematician John von Neumann made it his goal to
implement this idea. In 1945, he published a model in which the program
(the algorithm) and the data (the input and output) were housed in a
common storage device. Thus, both could be manipulated simultaneously
without having to change the hardware. In this way, he converted the
"Turing machine" into the "universal Turing machine"; tha


same -- a price reduction by a factor of 4 million. And in both
areas, this development has continued without pause.

These increases in performance have formed the material basis for the
rapidly growing number of activities carried out by means of algorithms.
We have now reached a point where Leibniz\'s distinction between
creative mental functions and "simple calculations" is becoming
increasingly fuzzy. Recent discussions about the allegedly threatening
"domination of the computer" have been kindled less by the increased use
of algorithms as such than by the gradual blurring of this distinction
with new possibilities to formalize and mechanize increasing areas of
creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
not long ago were reserved for human intelligence,


server
or from the standpoint of one of the two teams. If writing about Little
League games, the program can be instructed to ignore the errors made by
children -- because no parent wants to read about those -- and simply
focus on their heroics. The algorithm was soon patented, and a start-up
business was created from the original interdisciplinary research
project: Narrative Science. In addition to sports reports, it now offers
texts of all sorts, but above all financial reports -- another field for
which


he percentage of news
that would be written by computers 15 years from now, Narrative
Science\'s chief technology officer and co-founder Kristian Hammond
confidently predicted "\[m\]ore than 90 percent." He added that, within
the next five years, an algorithm could even win a Pulitzer
Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
self-promotion but, as a general estimation, Hammond\'s assertion is not
entirely beyond belief. It remains to be seen whether algorithms will
replace or simply supplement traditional journalism. Yet because media
companies are now under strong financial pressure, it is certainly
reasonable to predict that many journalistic texts will be automated in
the future. Entirely different app


ressed interest in Narrative Science and has invested in it through
its venture-capital firm In-Q-Tel, there are indications that
applications are being developed beyond the field of journalism. For the
purpose of spreading propaganda, for instance, algorithms can easily be
used to create a flood of entries on online forums and social mass
media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
one of many companies offering automated text analysis and production.
As implemented by IBM and o


Think, too, of the typical statement that is made at the beginning of a
call to a telephone hotline -- "This call may be recorded for training
purposes." Increasingly, this training is not intended for the employees
of the call center but rather for algorithms. The latter are expected to
learn how to recognize the personality type of the caller and, on that
basis, to produce an appropriate script to be read by its poorly
educated and part-time human
co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
use of algorithms to grade student
essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
to expand this list any further. Even without additional references to
comparable developments in the fields of image, sound, language, and
film analysis, it is clear by now that, on many fronts, the borders
between the creative and the mechanical have
shifted.[^93^](#c2-note-0093){#c2-note-0093a}
:::

::: {.section}
### Dynamic algorithms {#c2-sec-0021}

The algorithms used for such tasks, however, are no longer simple
sequences of static instructions. They are no longer repeated unchanged,
over and over again, but are dynamic and adaptive to a high degree. The
computing power available today is used to write programs that modify
and improve themselves semi-automatically and in response to feedback.

What this means can be illustrated by the example of evolutionary and
self-learning algorithms. An evolutionary algorithm is developed in an
iterative process that continues to run until the desired result has
been achieved. In most cases, the values of the variables of the first
generation of algorithms are chosen at random in order to diminish the
influence of the programmer\'s presuppositions on the results. These
cannot be avoided entirely, however, because the type of variables
(independent of their value) has to be determined in the first place. I
will return to this problem later on. This is []{#Page_109
type="pagebreak" title="109"}followed by a phase of evaluation: the
output of every tested algorithm is evaluated according to how close it
is to the desired solution. The best are then chosen and combined with
one another. In addition, mutations (that is, random changes) are
introduced. These steps are then repeated as often as necessary until,
according to the specifications in question, the algorithm is
"sufficient" or cannot be improved any further. By means of intensive
computational processes, algorithms are thus "cultivated"; that is,
large numbers of these are tested instead of a single one being designed
analytically and then implemented. At the heart of this pursuit is a
functional solution that proves itself experimentally and in practice,
but


. The fundamental
methods behind this process largely derive from the 1970s (the first
stage of artificial intelligence), the difference being that today they
can be carried out far more effectively. One of the best-known examples
of an evolutionary algorithm is that of Google Flu Trends. In order to
predict which regions will be especially struck by the flu in a given
year, it evaluates the geographic distribution of internet searches for
particular terms ("cold remedies," for instance). To develop the
p



of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}

In pursuits of this magnitude, the necessary processes can only be
administered by computer programs. The series of tests are no longer
conducted by programmers but rather by algorithms. In short, algorithms
are implemented in order to write new algorithms or determine their
variables. If this reflexive process, in turn, is built into an
algorithm, then the latter becomes "self-learning": the programmers do
not set the rules for its execution but rather the rules according to
which the algorithm is supposed to learn how to accomplish a particular
goal. In many cases, the solution strategies are so complex that they
are incomprehensible in retrospect. They can no longer be tested
logically, only experimentally. Such algorithms are essentially black
boxes -- objects that can only be understood by their outer behavior but
whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
title="110"}
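The iterative cycle described above -- a random first generation, evaluation against the desired solution, selection, recombination, and mutation -- can be sketched in a few lines of Python. The bit-string "target," the population size, and all rates below are illustrative assumptions, not taken from any of the systems discussed here:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical "desired output"

def fitness(candidate):
    # Evaluation: how close is this output to the desired solution?
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Mutation: random changes introduced into the next generation
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def crossover(a, b):
    # Recombination: the best candidates are combined with one another
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# First generation: values chosen at random to limit the programmer's bias
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

generation = 0
while max(map(fitness, population)) < len(TARGET) and generation < 500:
    generation += 1
    population.sort(key=fitness, reverse=True)
    parents = population[:10]               # selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children         # the "cultivated" next generation

best = max(population, key=fitness)
print("solved after", generation, "generations:", best)
```

Large numbers of candidates are tested instead of a single one being designed analytically; what survives is a functional solution that proves itself experimentally.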

Automatic facial recognition, as used in surveillance technologies an


on the fact that
computers can evaluate large numbers of facial images, first to produce
a general model for a face, then to identify the variables that make a
face unique and therefore recognizable. With so-called "unsupervised" or
"deep-learning" algorithms, some developers and companies have even
taken this a step further: computers are expected to extract faces from
unstructured images -- that is, from volumes of images that contain
images both with faces and without them -- and to do so without
poss


such learning processes. In recent
years, however, there have been enormous leaps in available computing
power, and both the data inputs and the complexity of the learning
models have increased exponentially. Today, on the basis of simple
patterns, algorithms are developing improved recognition of the complex
content of images. They are refining themselves on their own. The term
"deep learning" is meant to denote this very complexity. In 2012, Google
was able to demonstrate the performance capacity of it


videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
it was possible to create a model in just three days that increased
facial recognition in unstructured images by 70
percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
does not "know" what a face is, but it reliably recognizes a class of
forms that humans refer to as a face. One advantage of a model that is
not created on the basis of prescribed parameters is that it can also
identify faces in non-standard situ­at


on is
in the background, if a face is half-concealed, or if it has been
recorded at a sharp angle). Thanks to this technique, it is possible to
search the content of images directly and not, as before, primarily by
searching their descriptions. Such algorithms are also being used to
identify people in images and to connect them in social networks with
the profiles of the people in question, and this []{#Page_111
type="pagebreak" title="111"}without any cooperation from the users
themselves. Such algorithms are also expected to assist in directly
controlling activity in "unstructured" reality, for instance in
self-driving cars or other autonomous mobile applications that are of
great interest to the military in particular.

Algorithms of this sort can react and adjust themselves directly to
changes in the environment. This feedback, however, also shortens the
timeframe within which they are able to generate repetitive and
therefore predictable results. Thus, algorithms and their predictive
powers can themselves become unpredictable. Stock markets have
frequently experienced so-called "sub-second extreme events"; that is,
price fluctuations that happen in less than a
second.[^96^](#c2-note-0096){#c2-note-0096a} D


and was thus
perceptible to humans), have not been terribly
uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
voice commands on mobile phones (Apple\'s Siri, for example, which came
out in 2011), programs based on self-learning algorithms have now
reached the public at large and have infiltrated ever more areas of
everyday life.
:::

::: {.section}
### Sorting, ordering, extracting {#c2-sec-0022}

Orders generated by algorithms are a constitutive element of the digital
condition. On the one hand, the mechanical pre-sorting of the
(informational) world is a precondition for managing immense and
unstructured amounts of data. On the other hand, these large amounts of
data and


in which they are stored and processed
provide the material precondition for developing increasingly complex
algorithms. Necessities and possibilities are mutually motivating one
another.[^98^](#c2-note-0098){#c2-note-0098a}

Perhaps the best-known algorithms that sort the digital infosphere and
make it usable in its present form are those of search engines, above
all Google\'s PageRank. Thanks to these, we can find our way around in a
world of unstructured information and transfer increasingly larger pa


ly mean the absence of any structure but rather the presence of
another type of order -- a meta-structure, a potential for order -- out
of which innumerable specific arrangements can be generated on an ad hoc
basis. This meta-structure is created by algorithms. They subsequently
derive from it an actual order, which the user encounters, for instance,
when he or she scrolls through a list of hits produced by a search
engine. What the user does not see are the complex preconditions for
assembling the search


. By the middle of 2014, according to the
company\'s own information, the Google index alone included more than a
hundred million gigabytes of data.

Originally (that is, in the second half of the 1990s), PageRank
functioned in such a way that the algorithm analyzed the structure of
links on the World Wide Web, first by noting the number of links that
referred to a given document, and second by evaluating the "relevance"
of the site that linked to the document in question. The relevance of a
site, in tu


ssigned a value, the PageRank. The latter served to present the
documents found with a given search term as a hierarchical list (search
results), whereby the document with the highest value was listed
first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
successful because it reduced the unfathomable chaos of the World Wide
Web to a task that could be managed without difficulty by an individual
user: inputting a search term and selecting from one of the presented
"hits." The simplicity of the user\'s final choice, together with the
quality of the algorithmic pre-selection, quickly pushed Google past its
competition.
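The logic just described -- a document's value derives from the number of incoming links and from the value of the sites that link to it -- can be sketched as a small iterative computation. The mini link graph, iteration count, and damping factor below are illustrative assumptions, not Google's actual implementation:

```python
# Simplified PageRank sketch: rank flows along links, so a link from a
# highly ranked page counts for more than one from an obscure page.
links = {                    # hypothetical mini-web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}   # start with equal rank
damping = 0.85                              # standard damping factor

for _ in range(50):                         # iterate until ranks stabilize
    new_rank = {}
    for p in pages:
        # each page passes its rank on, divided among its outgoing links
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

# present the documents as a hierarchical list, highest value first
for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(p, round(r, 3))
```

In this toy graph, page "C" ends up first because three pages link to it, including the highly ranked "A"; "D", to which nothing links, stays at the bottom.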

Underlying this process is the assumption that every link is an
indication of relevance, and that links from frequently linked (that is,
popular) sources are more important than those from


t can be
understood in terms of purely quantitative variables and it is not
necessary to have any direct understanding of a document\'s content or
of the context in which it exists.

In the middle of the 1990s, when the first version of the PageRank
algorithm was developed, the problem of judging the relevance of
documents whose content could only partially be evaluated was not a new
one. Science administrators at universities and funding agencies had
been facing this difficulty since the 1950s. During th


ere of information is treated as
a self-referential, closed world, and documents are accordingly only
evaluated in terms of their position within this world, though with
quantitative criteria such as "central"/"peripheral."

Even though the PageRank algorithm was highly effective and assisted
Google\'s rapid ascent to a market-leading position, at the beginning it
was still relatively simple and its mode of operation was at least
partially transparent. It followed the classical statistical model of an
algorithm. A document or site referred to by many links was considered
more important than one to which fewer links
referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
the given structural order of information and determined the position of
every document therein, and this was largely done independently of the
context of the search and without making any assumptions about it. This
approach functioned relat


the time Google was founded, no
one would have thought to check the internet, quickly and while on
one\'s way, for today\'s menu at the restaurant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
:::

::: {.section}
### Algorithm clouds {#c2-sec-0023}

In order to react to such changes in user behavior -- and simultaneously
to drive them further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which influences the value of a site within
Page­Rank and thus the order of search results. The algorithm is no
longer a fixed object or unchanging recipe but is transforming into a
dynamic process, an opaque cloud composed of multiple interacting
algorithms that are continuously refined (between 500 and 600 times a
year, according to some estimates). These ongoing developments are so
extensive that, since 2003, several new versions of the algorithm cloud
have appeared each year with their own names. In 2014 alone, Google
carried out 13 large updates, more than ever
before.[^105^](#c2-note-0105){#c2-note-0105a}

These changes continue to bring about new levels of abstraction, so that
the algorithm takes into account additional variables such as the time
and place of a search, alongside a person\'s previously recorded
behavior -- but also his or her involvement in social environments, and
much more. Personalization and contextualization were made part of
Google\'s search algorithm in 2005. At first it was possible to choose
whether or not to use these. Since 2009, however, they have been a fixed
and binding component for everyone who conducts a search through
Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
search algorithm had grown to include at least 200
variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
that the algorithm no longer determines the position of a document
within a dynamic informational world that exists for everyone
externally. Instead, it now assigns a rank to a document\'s content within a
dynamic and singular universe of information that is tailored to every


reated instead of just an excerpt from a previously existing order. The
world is no longer being represented; it is generated uniquely for every
user and then presented. Google is not the only company that has gone
down this path. Orders produced by algorithms have become increasingly
oriented toward creating, for each user, his or her own singular world.
Facebook, dating services, and other social mass media have been
pursuing this approach even more radically than Google.
:::

::: {.section}
### From th


"}shared by everyone) but also information
about every individual\'s own relation to the
latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
established for every user, and the more extensive they are, the better
they are for the algorithms. A profile created by Google, for instance,
identifies the user on three levels: as a "knowledgeable person" who is
informed about the world (this is established, for example, by recording
a person\'s searches, browsing behavior, etc.), as a "physic


eady gone through this sequence of
activity. Or, as the data-mining company Science Rockstars (!) once
pointedly expressed on its website, "Your next activity is a function of
the behavior of others and your own past."

Google and other providers of algorithmically generated orders have been
devoting increased resources to the prognostic capabilities of their
programs in order to make the confusing and potentially time-consuming
step of the search obsolete. The goal is to minimize a rift that comes
to light []{#Page_117 type="pagebreak" title="117"}in the act of
searching, namely that between the world as everyone experiences it --
plagued by uncertainty, for searching implies "not knowing something" --
and the world of algorithmically generated order, in which certainty
prevails, for everything has been well arranged in advance. Ideally,
questions should be answered before they are asked. The first attempt by
Google to eliminate this rift is called Google Now, and its slogan


,
it will send a reminder (sometimes earlier, sometimes later) when it is
time to go. That which Google is just experimenting with and testing in
a limited and unambiguous context is already part of Facebook\'s
everyday operations. With its EdgeRank algorithm, Facebook already
organizes everyone\'s newsfeed, entirely in the background and without
any explicit user interaction. On the basis of three variables -- user
affinity (previous interactions between two users), content weight (the
rate of interaction between all users and a specific piece of content),
and currency (the age of a post) -- the algorithm selects content from
the status updates made by one\'s friends to be displayed on one\'s own
page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
ensures that the stream of updates remains easy to scroll through, while
also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
-- leaving enough room for advertising. This potential for manipulation,
which algorithms possess as they work away in the background, will be
the topic of my next section.
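The three variables named above can be read as a single score per story. Since Facebook has never published the exact formula, the multiplicative form, the decay function, and all numbers below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Story:
    author: str
    affinity: float   # user affinity: previous interactions with this author
    weight: float     # content weight: rate of interaction with this content
    age_hours: float  # currency: the age of the post

def edge_score(story: Story) -> float:
    # One commonly cited reading of EdgeRank: affinity x weight x time decay.
    # The concrete decay function here is an assumption, not the real formula.
    decay = 1 / (1 + story.age_hours)
    return story.affinity * story.weight * decay

feed = [
    Story("close friend", affinity=0.9, weight=0.6, age_hours=5),
    Story("acquaintance", affinity=0.3, weight=0.9, age_hours=1),
    Story("close friend", affinity=0.9, weight=0.8, age_hours=48),
]

# The newsfeed shows the highest-scoring stories first -- entirely in the
# background and without any explicit user interaction.
for s in sorted(feed, key=edge_score, reverse=True):
    print(f"{s.author:13s} {edge_score(s):.3f}")
```

Note how currency dominates in this sketch: a fresh post from a mere acquaintance outranks a two-day-old post from a close friend.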
:::

::: {.section}
### Variables and correlations {#c2-sec-0025}

Every complex algorithm contains a multitude of variables and usually an
even greater number of ways to make connections between them. Every
variable and every relation, even if they are expressed in technical or
mathematical terms, codifies assumptions that express a speci


- data and variables
-- are always already "cooked"; that is, they are engendered through
cultural operations and formed within cultural
categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
produced data and with every execution of an algorithm, the assumptions
embedded in them are activated, and the positions contained within them
have effects on the world that the algorithm generates and presents.

As already mentioned, the early version of the PageRank algorithm was
essentially based on the rather simple assumption that frequently linked
content is more relevant than content that is only seldom linked to, and
that links to sites that are themselves frequently linked to should be
given more weight than those


of this list is not just already popular but will remain so. A third
of all users click on the first search result, and around 95 percent do
not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
the earliest version of the PageRank algorithm did not represent
existing reality but rather (co-)constituted it.

Popularity, however, is not the only element with which algorithms
actively give shape to the user\'s world. A search engine can only sort,
weigh, and make available that portion of information which has already
been incorporated into its index. Everything else remains invisible. The
relation between []{#Page_119 t


haps the information has been saved in formats that search engines
cannot read or can only poorly read, or perhaps it has been hidden
behind proprietary barriers such as paywalls. In order to expand the
realm of things that can be exploited by their algorithms, the operators
of search engines offer extensive guidance about how providers should
design their sites so that search tools can find them in an optimal
manner. It is not necessary to follow this guidance, but given the
central role of search engine


(almost)
every producer of information to optimize its position in a search
engine\'s index, and thus there is a strong incentive to accept the
preconditions in question. Considering, moreover, the nearly
monopolistic character of many providers of algorithmically generated
orders and the high price that one would have to pay if one\'s own site
were barely (or not at all) visible to others, the term "voluntary"
begins to take on a rather foul taste. This is a more or less subtle way
of pre-formatting the


nd
"relevance" do little, however, to conceal the political nature of
defining variables. Efficient with respect to what? Relevant for whom?
These are issues that are decided without much discussion by the
developers and institutions that regard the algorithms as their own
property. Every now and again such questions incite public debates,
mostly when the interests of one provider happen to collide with those
of its competition. Thus, for instance, the initiative known as
FairSearch has argued that Google


er the juridical person of the copyright holder. It was according to
the latter\'s interests and preferences that searching was being
reoriented. Amazon has employed similar tactics. In 2014, the online
merchant changed its celebrated recommendation algorithm with the goal
of reducing the presence of books released by irritating publishers that
dared to enter into price negotiations with the
company.[^122^](#c2-note-0122){#c2-note-0122a}

Controversies over the methods of Amazon or Google, however, are the
exception rather than the rule. Necessary (but never neutral) decisions
about recording and evaluating data []{#Page_121 type="pagebreak"
title="121"}with algorithms are being made almost all the time without
any discussion whatsoever. The logic of the original PageRank algorithm
was criticized as early as the year 2000 for essentially representing
the commercial logic of mass media, systematically disadvantaging
less-popular though perhaps otherwise relevant information, and thus
undermining the "substantive vision of the web as an inclusive
democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
the search algorithm that have been adopted since then may have modified
this tendency, but they have certainly not weakened it. In addition to
concentrating on what is popular, the new variables privilege recently
uploaded and constantly updated content. The selection of search results
is now contingent upon the location of the user, and it takes into
account his or her social networking. It is oriented toward the average
of a dynamically modeled group. In other words, Google\'s new algorithm
favors that which is gaining popularity within a user\'s social network.
The global village is thus becoming more and more
provincial.[^124^](#c2-note-0124){#c2-note-0124a}
:::

::: {.section}
### Data behaviorism {#c2-sec-0026}

Algorithms such as G


plexity of information, they direct their gaze inward, which is not
to say toward the inner being of individual people. As a level of
reference, the individual person -- with an interior world and with
ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
black boxes that can only be understood in terms of their reactions to
stimuli. Consciousness, perception, and intention do not play any role
for them. In this regard, the legal philosopher Antoinette Rouvroy has
written about "data beha


epending on the
context and the need, individuals can either be assigned to this
function or removed from it. All of this happens behind the user\'s back
and in accordance with the goals and positions that are relevant to the
developers of a given algorithm, be it to optimize profit or
surveillance, create social norms, improve services, or whatever else.
The results generated in this way are sold to users as a personalized
and efficient service that provides a quasi-magical product. Out of the
enormous


e have been looking for. At
best, it is only partially transparent how these results came about and
which positions in the world are strengthened or weakened by them. Yet,
as long as the needle is somewhat functional, most users are content,
and the algorithm registers this contentedness to validate itself. In
this dynamic world of unmanageable complexity, users are guided by a
sort of radical, short-term pragmatism. They are happy to have the world
pre-sorted for them in order to improve their activity i


hne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
could also say that this anxiety has been caused by the fact that the
automation of labor has begun to affect middle-class jobs as well.

[86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
Write a Better News Story than a Human Reporter?" *Wired* (April 24,
2012), online.

[87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
(New York: New Vessel Press,


nce in Deep Time,"
in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
*Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
2014), pp. 125--46.

[98](#c2-note-0098a){#c2-note-0098}  Another facial recognition
algorithm by Google provides a good impression of the rate of progress.
As early as 2011, it was able to identify dogs in images with 80
percent accuracy. Three years later, this rate had not only increased to
93.5 percent (which corresponds to human capabilities), but the
algorithm could also identify more than 200 different types of dog,
something that hardly any person can do. See Robert McMillan, "This Guy
Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
15, 2015), online.

[99](#c2-note-0099a){#c2-not


" in Konrad Becker and Felix Stalder (eds), *Deep Search:
Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
2009), pp. 64--83.

[104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
not be registered by the algorithm at all, for the search engine indexed
the web by having its "crawler" follow the links itself.

[105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
[moz.com](http://moz.com) (2016), online.

[106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
of Personalisation," *First Monday* 17 (2011)


drivers should be sent on a detour, so that no traffic jam arises,
and which should be shown the most direct route, now free of traffic.

[111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
online.

[112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

[113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
unproces


en," in
Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
133--48.

[117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
world to be recorded by algorithms is not restricted to digital
networks. As early as 1994 in Germany, for instance, a new sort of
typeface was introduced (the *Fälschungserschwerende Schrift*,
"forgery-impeding typeface") on license plates for the sake of machine
readability and fa


 "Antitrust: Commission Sends
Statement of Objections to Google on Comparison Shopping Service,"
*European Commission: Press Release Database* (April 15, 2015), online.

[120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
the middle of 2014, according to some sources, Google had received
around 20 million requests to remove links from its index on account of
copyright violations.

[121](#c2-note-0121a){#c2-note-0


cal and Scholarly Phenomenon," *Information, Communication &
Society* 15 (2012): 662--79.
:::
:::

[III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}

::: {.section}
Referentiality, communality, and algorithmicity have become the
characteristic forms of the digital condition because more and more
people -- in more and more segments of life and by means of increasingly
complex technologies -- are actively (or compulsorily) participating in
the negotiation


anner are ideal-typical manifestations of
the power of networks.

The problem experienced by the unwilling-willing users of Facebook has
not been caused by the transformation of communication into data as
such. This is necessary to provide input for algorithms, which turn the
flood of information into something usable. To this extent, the general
complaint about the domination of algorithms is off the mark. The
problem is not the algorithms themselves but rather the specific
capitalist and post-democratic setting in which they are implemented.
They only become an instrument of domination when open and
decentralized activities are transferred into closed and centralized
structures in


, fundamental decision-making powers and
possibilities for action are embedded that legitimize themselves purely
on the basis of their output. Or, to adapt the title of Rosa von
Praunheim\'s film, which I discussed in my first chapter: it is not the
algorithm that is perverse, but the situation in which it lives.
:::

::: {.section}
### Political surveillance {#c3-sec-0008}

In June 2013, Edward Snowden exposed an additional and especially
problematic aspect of the expansion of post-democratic structures:


undermine the solidarity model of health
insurance.[^55^](#c3-note-0055){#c3-note-0055a}

According to the legal scholar Frank Pasquale, the sum of all these
developments has led to a black-box society: ever more social processes are
being controlled by algorithms whose operations are not transparent
because they are shielded from the outside world and thus from
democratic control.[^56^](#c3-note-0056){#c3-note-0056a} This
ever-expanding "post-democracy" is not simply liberal democracy with a
few problems tha


ject,
which is organized through specialized infrastructure and by local and
international communities, also utilizes a number of automated
processes. These are so important that not only was a "mechanical edit
policy" developed to govern the use of algorithms for editing; the
latter policy was also supplemented by an "automated edits code of
conduct," which defines further rules of behavior. Regarding the
implementation of a new algorithm, for instance, the code states: "We do
not require or recommend a formal vote, but if there []{#Page_165
type="pagebreak" title="165"}is significant objection to your plan --
and even minorities may be significant! -- then change it or drop it
altoge


his to
occur, it is necessary to provide data in a standard-compatible format
that is machine-readable. Only in such a way can they be browsed by
algorithms and further processed. Open data are an important
precondition for implementing the power of algorithms in a democratic
manner. They ensure that there can be an effective diversity of
algorithms, for anyone can write his or her own algorithm or commission
others to process data in various ways and in light of various
interests. Because algorithms cannot be neutral, their diversity -- and
the resulting ability to compare the results of different methods -- is
an important precondition for ensuring that they do not become an
uncontrollable instrument of power. This can be achieved most dependably through fre


4, for instance, Google
announced that it would support "end-to-end" encryption for emails. See
"Making End-to-End Encryption Easier to Use," *Google Security Blog*
(June 3, 2014), online.

[13](#c3-note-0013a){#c3-note-0013}  Not all services use algorithms to
sort through data. Twitter does not filter the news stream of individual
users but rather allows users to create their own lists or to rely on
external service providers to select and configure them. This is one of
the reasons why Twitter is rega


actual content of communication. In
practice, however, the two categories cannot always be sharply
distinguished from one another.

[42](#c3-note-0042a){#c3-note-0042}  By manipulating online polls, for
instance, or flooding social mass media with algorithmically generated
propaganda. See Glenn Greenwald, "Hacking Online Polls and Other Ways
British Spies Seek to Control the Internet," *The Intercept* (July 14,
2014), online.

[43](#c3-note-0043a){#c3-note-0043}  Jeremy Scahill and Glenn Greenwald,
"Th


ote-0055}  Rainer Schneider, "Rabatte für
Gesundheitsdaten: Was die deutschen Krankenversicherer planen," *ZDNet*
(December 18, 2014), online \[--trans.\].

[56](#c3-note-0056a){#c3-note-0056}  Frank Pasquale, *The Black Box
Society: The Secret Algorithms that Control Money and Information*
(Cambridge, MA: Harvard University Press, 2015).

[57](#c3-note-0057a){#c3-note-0057}  "Facebook Gives People around the
World the Power to Publish Their Own Stories," *Facebook Help Center*
(2017), online.

[58

 
