behavior in Stalder 2018


modulate various dimensions of existence -- from aesthetic
preferences to the methods of biological reproduction and the rhythms of
space and time. In these worlds, the dynamics of network power have
reconfigured notions of voluntary and involuntary behavior, autonomy,
and coercion. The third feature of the new cultural landscape is its
*algorithmicity*. It is characterized, in other []{#Page_5
type="pagebreak" title="5"}words, by automated decision-making processes
that reduce and give shape to the glut


ture -- under which this activity has to take place. Social
mass media such as Facebook and Google will receive particular attention
as the most conspicuous manifestations of this tendency. Here, under new
structural provisions, a new combination of behavior and thought has
been implemented that promotes the normalization of post-democracy and
contributes to its otherwise inexplicable acceptance in many areas of
society. "Commons," on the contrary, denotes approaches for developing
new and comprehensive


questions that these crises had
prompted. It was thus a combination of positive vision and pressure that
motivated a great variety of actors to change, at times with
considerable effort, the established processes, mature institutions, and
their own behavior. They intended to appropriate, for their own
projects, the various and partly contradictory possibilities that they
saw in these new technologies. Only then did a new technological
infrastructure arise.

This, in turn, created the preconditions for p


te-0070a} This, he
believed, happens independently of and in addition to whatever specific
message a medium might be conveying. From this perspective, reality does
not exist outside of media, given that media codetermine our personal
relation to and behavior in the world. For McLuhan and the Toronto
School, media were thus not channels for transporting content but rather
the all-encompassing environments -- galaxies -- in which we live.

Such ideas were circulating much earlier and were intensively devel


te-0001}  Kathrin Passig and Sascha Lobo,
*Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

[2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
behaving" is used here to mean that, while in the public eye, the
behavior of the people []{#Page_177 type="pagebreak" title="177"}in
question conformed to heterosexual norms regardless of their personal
sexual orientations.

[3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
off. In this case, too, ther


of
sociability\'s power are diffuse and omnipresent. They are not
repressive but rather constitutive. No one forces a scientist to publish
in English or a woman editor to tolerate disparaging remarks on
Wikipedia. People accept these often implicit behavioral norms (sexist
comments are permitted, for instance) out of their own interests in
order to acquire access to the resources circulating within the networks
and to constitute themselves within them. In this regard, Singh
distinguishes between the "intr


to create other advantages for oneself.
:::

::: {.section}
### Homogeneity, difference and authority {#c2-sec-0017}

Protocols are present on more than a technical level; as interpretive
frameworks, they structure viewpoints, rules, and patterns of behavior
on all levels. Thus, they provide a degree of cultural homogeneity, a
set of commonalities that lend these new formations their communal
nature. Viewed from the outside, these formations therefore seem
inclined toward consensus and uniformity, for th


ough the production
of differences; that is, by constantly changing their common ground.
Those who are able to add many novel aspects to the common resources
gain a degree of authority. They assume central positions and they
influence, through their behavior, the development of the field more
than others do. However, their authority, influence, and de facto power
are not based on any means of coercion. As Niklas Luhmann noted, "In the
end, one participant\'s achievements in making selections \[...\] are


ases, the solution strategies are so complex that they
are incomprehensible in retrospect. They can no longer be tested
logically, only experimentally. Such algorithms are essentially black
boxes -- objects that can only be understood by their outer behavior but
whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
title="110"}
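The point about experimental rather than logical testing can be illustrated with a minimal sketch. The hidden rule below is purely hypothetical, standing in for a trained model whose internal structure is not inspectable; all an observer can do is probe it and record its outer behavior:

```python
# Minimal sketch of a "black box": the internal rule is treated as hidden,
# so the system can only be characterized by probing inputs and observing
# outputs -- experimentally rather than logically.

def black_box(x):
    # Hypothetical opaque decision rule; a stand-in for a model whose
    # internal structure cannot be known.
    return "accept" if (x * 2654435761) % 97 < 40 else "reject"

# Probe the system with many inputs and record the observed behavior.
observations = {x: black_box(x) for x in range(1000)}
accept_rate = sum(v == "accept" for v in observations.values()) / len(observations)
print(f"observed accept rate: {accept_rate:.2f}")
```

All one obtains this way is a statistical description of the system's outer behavior, which is precisely the limitation described above.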

Automatic facial recognition, as used in surveillance technologies and
for authorizing access to certain things, is based on the fact that
computers can evaluate


et, quickly and while on
one\'s way, for today\'s menu at the restaurant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
:::

::: {.section}
### Algorithm clouds {#c2-sec-0023}

In order to react to such changes in user behavior -- and simultaneously
to advance them further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which


^105^](#c2-note-0105){#c2-note-0105a}

These changes continue to bring about new levels of abstraction, so that
the algorithm takes into account additional variables such as the time
and place of a search, alongside a person\'s previously recorded
behavior -- but also his or her involvement in social environments, and
much more. Personalization and contextualization were made part of
Google\'s search algorithm in 2005. At first it was possible to choose
whether or not to use these. Since 2009, however,


r
they are for the algorithms. A profile created by Google, for instance,
identifies the user on three levels: as a "knowledgeable person" who is
informed about the world (this is established, for example, by recording
a person\'s searches, browsing behavior, etc.), as a "physical person"
who is located and mobile in the world (a component established, for
example, by tracking someone\'s location through a smartphone, sensors
in a smart home, or body signals), and as a "social person" who
interacts with


ta shadows." They no longer represent what is
conventionally referred to as "individuality," in the sense of a
spatially and temporally uniform identity. On the one hand, profiles
rather consist of sub-individual elements -- of fragments of recorded
behavior that can be evaluated on the basis of a particular search
without promising to represent a person as a whole -- and they consist,
on the other hand, of clusters of multiple people, so that the person
being modeled can simultaneously occupy different


part of every person\'s profile,
a certain percentage of them have already gone through this sequence of
activity. Or, as the data-mining company Science Rockstars (!) once
pointedly expressed on its website, "Your next activity is a function of
the behavior of others and your own past."

Google and other providers of algorithmically generated orders have been
devoting increased resources to the prognostic capabilities of their
programs in order to make the confusing and potentially time-consuming
step o


group. In other words, Google\'s new algorithm
favors that which is gaining popularity within a user\'s social network.
The global village is thus becoming more and more
provincial.[^124^](#c2-note-0124){#c2-note-0124a}
:::

::: {.section}
### Data behaviorism {#c2-sec-0026}

Algorithms such as Google\'s thus reiterate and reinforce a tendency
that has already been apparent on both the level of individual users and
that of communal formations: in order to deal with the vast amounts and
complexity of inf


thms, people are
black boxes that can only be understood in terms of their reactions to
stimuli. Consciousness, perception, and intention do not play any role
for them. In this regard, the legal philosopher Antoinette Rouvroy has
written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
With this, she is referring to the gradual return of a long-discredited
approach to behavioral psychology that postulated that human behavior
could be explained, predicted, and controlled purely on the basis of
outwardly observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
Psychological dimensions were ignored (and are ignored in this new
version of behaviorism) because it is difficult to observe them
empirically. Accordingly, this approach also did away with the need
[]{#Page_122 type="pagebreak" title="122"}to question people directly or
take into account their subjective experiences, thoughts, and f


closing
information. Any strictly empirical science, or so the thinking went,
required its practitioners to disregard everything that did not result
in physical and observable action. From this perspective, it was
possible to break down even complex behavior into units of stimulus and
reaction. This led to the conviction that someone observing another\'s
activity always knows more than the latter does about himself or herself,
for, unlike the person being observed, whose impressions can be
inaccurate, the



was held to be mechanistic, reductionist, and authoritarian because it
privileged the observing scientist over the subject. In practice, it
quickly ran into its own limitations: it was simply too expensive and
complicated to gather data about human behavior.

Yet that has changed radically in recent years. It is now possible to
measure ever more activities, conditions, and contexts empirically.
Algorithms like Google\'s or Amazon\'s form the technical backdrop for
the revival of a mechanistic, reduction


Hildebrandt (eds), *Privacy, Due Process and the Computational
Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
York: Routledge, 2013), pp. 143--65.

[126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
to predict and control the behavior of the individual organism. This is
our 'dependent variable' -- the effect for which we are to find the
cause. Our 'independent variables' -- the causes of behavior -- are the
external conditions of which behavior is a function."

[127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
9, 2014), online.

[128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
"Cri


sec-0005}

Unequal access to information has resulted in an imbalance of power, for
the evaluation of data opens up new possibilities for action. Such data
can be used, first, to earn revenue from personalized advertisements;
second, to predict user behavior with greater accuracy; and third, to
adjust the parameters of interaction in such a way that preferred
patterns of []{#Page_135 type="pagebreak" title="135"}behavior become
more likely. Almost all commercially driven social mass media are
financed by advertising. In 2014, Facebook, Google, and Twitter earned
90 percent of their revenue through such means. It is thus important for
these companies to learn as much


n detected.
To do so, it is necessary to have immense numbers of users generating
immense volumes of data. Accordingly, these new []{#Page_136
type="pagebreak" title="136"}analytic possibilities do not mean that
Facebook can accurately predict the behavior of a single user. The
unique person remains difficult to calculate, for all that could be
ascertained from this information would be a minimally different
probability of future behavior. As regards a single person, this gain in
knowledge would not be especially useful, for a slight change in
probability has no predictive power on a case-by-case basis. If, in the
case of a unique person, the probability of a particular future action
climbs from, say, 30 to 31 percent, then not much is gained with respect
to predicting this one person\'s behavior. If vast numbers of similar
people are taken into account, however, then the power of prediction
increases enormously. If, in the case of 1 million people, the
probability of a future action increases by 1 percent, this means that,
in the future, aro
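The aggregate arithmetic in this passage can be sketched directly. The population size and probabilities below are taken from the text; the simulation itself is only an illustration, not a method described by the author:

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

N = 1_000_000                    # size of the modeled group
p_before, p_after = 0.30, 0.31   # probability of the action, before/after

# For one person, the 1-point shift has no predictive power; across the
# group it implies roughly N * 0.01 = 10,000 additional actions in
# expectation.
expected_extra = N * (p_after - p_before)
print(round(expected_extra))  # -> 10000

# A simulated draw makes the aggregate difference plainly visible, even
# though no individual outcome becomes predictable.
acted_before = sum(random.random() < p_before for _ in range(N))
acted_after = sum(random.random() < p_after for _ in range(N))
print(acted_after - acted_before)
```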


o common that no one at
Facebook could see anything especially wrong with the
experiment.[^26^](#c3-note-0026){#c3-note-0026a}

Why would they? All commercially driven social mass media conduct
manipulative experiments. From the perspective of "data behaviorism,"
this is the best way to acquire feedback from users -- far better than
direct surveys.[^27^](#c3-note-0027){#c3-note-0027a} Facebook had also
already conducted experiments in order to intervene directly in
political processes. On November 2, 201


the polls than those in the control group. In relation to a single
person, the extent of this influence was thus extremely weak and barely
relevant. Indeed, it would be laughable even to speak of influence at
all if only 250 people had altered their behavior. Personal experience
suggests that one cannot be manipulated by such things. It would be
false to conclude, however, that such interventions are irrelevant, for
matters are entirely different where large groups are concerned. On
account of Facebook\'


of social gravity, guide things in a certain direction. At work here is
the fundamental insight of cybernetics, namely that the "target" to be
met -- be it an enemy bomber,[^31^](#c3-note-0031){#c3-note-0031a} a
citizen, or a customer -- orients its behavior to its environment, to
which it is linked via feedback. From this observation, cybernetically
oriented social planners soon drew the conclusion that the best (because
indirect and hardly perceptible) method for influencing the "target"
would be to al


ollective Tiqqun,
this paradox was resolved by the introduction of "a new fable that,
after the Second World War, definitively \[...\] supplanted the liberal
hypothesis. Contrary to the latter, it proposes to conceive biological,
physical and social behaviors as something integrally programmed and
re-programmable."[^33^](#c3-note-0033){#c3-note-0033a} By the term
"liberal hypothesis," Tiqqun meant the assumption, stemming from the
time of the Enlightenment, that people could improve themselves by
applyin


to its conception of animals, plants,
and machines; like the latter, people are organisms that react to
stimuli from their environment. The hypothesis is thus associated with
the theories of "instrumental conditioning," which had been formulated
by behaviorists during the 1940s. In the case of both humans and other
animals, as it was argued, learning is not a process of understanding
but rather one of executing a pattern of stimulus and response. To learn
is thus to adopt a pattern of behavior with which one\'s own activity
elicits the desired reaction. In this model, understanding does not play
any role; all that matters is
behavior.[^34^](#c3-note-0034){#c3-note-0034a}

And this behavior, according to the cybernetic hypothesis, can be
programmed not by directly accessing people (who are conceived as
impenetrable black boxes) but rather by indirectly altering the
environment, with which organisms and machines are linked via feedback.
The


case-by-case basis, the effects of this are often
minimal for the individual. In aggregate and over long periods of time,
however, the effects can be substantial without the individual even
being able to detect them. Yet the practice of controlling behavior by
manipulating the environment is not limited to the environment of
information. In their enormously influential book from 2008, *Nudge*,
Richard Thaler and Cass Sunstein even recommended this as a general
method for "nudging" people, almost without


till
limited to the high end of the market, and smart meters, which have been
implemented across all social
strata.[^51^](#c3-note-0051){#c3-note-0051a} The latter provide
electricity companies with detailed real-time data about a household\'s
usage behavior and are supposed to enhance energy efficiency, but it
remains unclear exactly how this new efficiency will be
achieved.[^52^](#c3-note-0052){#c3-note-0052a} The concept of the "smart
city" extends this process to entire municipalities. Over the cours


ted
processes. These are so important that not only was a "mechanical edit
policy" developed to govern the use of algorithms for editing; the
latter policy was also supplemented by an "automated edits code of
conduct," which defines further rules of behavior. Regarding the
implementation of a new algorithm, for instance, the code states: "We do
not require or recommend a formal vote, but if there []{#Page_165
type="pagebreak" title="165"}is significant objection to your plan --
and even minorities may be


logy, Psychiatry, Evolution and
Epistemology* (London: Jason Aronson, 1972), pp. 166--82, at 177.

[33](#c3-note-0033a){#c3-note-0033}  Tiqqun, "The Cybernetic
Hypothesis," p. 4 (online).

[34](#c3-note-0034a){#c3-note-0034}  B. F. Skinner, *The Behavior of
Organisms: An Experimental Analysis* (New York: Appleton-Century, 1938).

[35](#c3-note-0035a){#c3-note-0035}  Richard H. Thaler and Cass
Sunstein, *Nudge: Improving Decisions about Health, Wealth and
Happiness* (New York: Penguin, 2008).

[36](

 
