Barok
Techniques of Publishing
2014


Techniques of Publishing

Draft translation of a talk given at the seminar Informace mezi komoditou a komunitou [The Information Between Commodity and Community] held at Tranzitdisplay in Prague, Czech Republic, on May 6, 2014

My contribution has three parts. I will begin by sketching the current environment of publishing in general, move on to some of the specificities of publishing
in the humanities and art, and end with a brief introduction to the Monoskop
initiative I was asked to include in my talk.
I would like to thank Miloš Vojtěchovský, Matěj Strnad and CAS/FAMU for
the invitation, and Tranzitdisplay for hosting this seminar. It offers an
opportunity for reflection at a decent distance from the previous
presentation of Monoskop in Prague eight years ago, when I took part in a new
media education workshop prepared by Miloš and Denisa Kera. Many things
have changed since then, not only in new media but in the humanities in general,
and I will try to articulate some of these changes from today's perspective,
primarily the perspective of publishing.

I. The Environment of Publishing
One change, perhaps the most serious, and one which bears on publishing in the
humanities as well, is that what just a year ago was treated as the paranoia of a bunch
of so-called technological enthusiasts is today a fact with which
the global public is well acquainted: we are all being surveilled. Virtually every
utterance on the internet, or rather every utterance made by means of equipment connected
to it through standard protocols, is recorded, in encrypted or unencrypted form,
on the servers of information agencies, in addition to the copies of a striking share of
these data held on the servers of private companies. We are only at the beginning of a civil
mobilization to reverse the situation, and the future is open, yet so far nothing suggests
that there is any real alternative other than "to demand the impossible."
There are at least two certainties today: surveillance is a feature of every communication
technology controlled by third parties, from post, telegraphy and telephony
to the internet; and at the same time it is a feature of ruling power in all the
variants humankind has come to know. In this regard, democracy can also be understood
as the involvement of its participants in deciding on the scale and use of
the information collected in this way.
I mention this because it suggests that all publishing initiatives, from libraries
through archives and publishing houses to schools, have their online activities,
backends, shared documents and email communication recorded by public institutions,
which intelligence agencies are, or at least ought to be.
In regard to publishing houses, it is notable that books and other publications today
are printed from digital files delivered to print shops over email; it is thus
not surprising to claim that a significant share of electronically prepared publications
is stored on servers in the public service. This means that besides being
required to send a number of printed copies to their national libraries,
publishers in fact send electronic versions to information agencies as well. Obviously,
the agencies couldn't care less about them, but that changes nothing about
the likely fact that, whatever it means, the world's largest electronic repository of
publications today is the server farms of the NSA.
Information agencies archive publications without the approval, perhaps without the
awareness, and indeed despite the disapproval of their authors and publishers, as an
"incidental" effect of their surveillance techniques. This situation is obviously
radically different from the totalitarianism we came to know. Even though secret
agencies in the Eastern Bloc blackmailed people into producing miserable literature
as their agents, samizdat publications could at least theoretically escape their
attention.
This is not the only difference. While captured samizdats were read by agents of
flesh and blood, publications collected through internet surveillance are "read"
by software agents. Both scan texts for "signals", i.e. terms and phrases
whose occurrences trigger interpretative mechanisms that control the operative
components of their organizations.
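
The mechanism described here can be pictured in a few lines of code. A minimal sketch, assuming nothing about any agency's actual systems; the watchlist and the handler are invented for illustration:

```python
import re

# Hypothetical watchlist; real systems match far richer patterns.
SIGNALS = ["samizdat", "encryption", "protest"]
pattern = re.compile("|".join(re.escape(s) for s in SIGNALS), re.IGNORECASE)

def scan(text: str) -> list[str]:
    """Return every watched term occurring in the text."""
    return pattern.findall(text)

def on_signal(document: str, hits: list[str]) -> None:
    # Stand-in for the "interpretative mechanism": a hit routes the
    # document onward inside the organization.
    print(f"flagged ({', '.join(hits)}): {document}")

for doc in ["A note on garden furniture.", "New samizdat editions circulating."]:
    hits = scan(doc)
    if hits:
        on_signal(doc, hits)
```
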
Today, publishing is similarly political, and from the point of view of power a
potentially subversive activity, as it was in communist Czechoslovakia. The
difference lies in its scale, reach and technique.
One of the messages of the recent "revelations" is that while it is recommended
to encrypt private communication, the internet is for its users also a medium of
direct contact with power. SEO, or search engine optimization, is now as relevant
a technique for books and other publications as for websites, since all of them
are read by similar algorithms, and authors can read this situation as a political
dimension of their work, as a challenge to transform and model these algorithms
through texts.


II. Techniques of Research in the Humanities Literature
Compiling the bibliography
From the circuitry we get to the audience: readers. Today, they also include
software and algorithms, such as those used for "reading" by information agencies
and corporations, and others facilitating reading for the so-called ordinary reader
searching for information online, but also for the "expert" reader, searching
primarily in library systems.
Libraries, as we said, are different from information agencies in that they are
funded by the public not to hide publications from it but to provide access to
them. A telling paradox of the age is that, on the one hand, information agencies
store almost all contemporary book production in its electronic version while
generally not caring about it at all, since the "signal" information lies elsewhere;
and, on the other hand, in order to provide electronic access, paid or
direct, libraries have to scan, at considerable cost, even publications that were
prepared for print electronically.
A more remarkable difference is, of course, that libraries select and catalogue
publications.
Their methods of selection are determined in the first place by their function as
public institutions protecting and projecting patriotic values, which is reflected
in their preference for domestic literature, i.e. literature written in official state
languages. Their methods of cataloguing, on the other hand, are characterized by sorting
by bibliographic records, particularly by categories of disciplines ordered in a
tree structure of knowledge. The result is that libraries shape research, including
academic research, towards a discursivity that is national and disciplinary, or
focused on the oeuvre of a particular author.
Digitizing catalogue records and allowing readers to search library indexes by their
structural items, i.e. author, publisher, place and year of publication, words in
the title, and discipline, does not reverse this tendency at all, but rather extends
it to the web.
I do not intend to underestimate the value and benefits of library work, nor the
importance of discipline-centered writing or of the recognition of an author's
oeuvre. But consider an author working on an article who, in the early phase
of research, needs to prepare a bibliography on the activity of Fluxus in central
Europe or on the use of documentary film in education. Such research cuts
across national boundaries and/or branches of disciplines, and the author is left to
travel not only to locate artefacts, protagonists and experts in the field but also to
find literature, which in turn makes even the mere compiling of a bibliography a
relatively demanding and costly activity.

In this sense, the digitization of publications and archival material, the provision
of free online access to them and the enabling of fulltext search, in other words
"open access", catalyzes research across political-geographical and disciplinary
configurations. For while the index of a printed book contains only selected terms,
and in order to search the index across several books the researcher has to have
them all at hand, software-enabled search in digitized texts (with good OCR)
works with an index of every single term in all of them.
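
The difference between a printed back-of-book index and software-enabled search can be made concrete with an inverted index, the basic structure behind fulltext search. A minimal sketch with an invented two-document "library":

```python
from collections import defaultdict

# A tiny stand-in for a digitized, OCR'd collection (titles invented).
library = {
    "fluxus_east": "Fluxus events in Prague Budapest and Vilnius",
    "doc_film": "documentary film in education a survey",
}

# Unlike a printed index of selected terms, every single term is indexed.
index = defaultdict(set)
for doc_id, text in library.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term: str) -> set[str]:
    """Return the ids of all documents containing the term."""
    return index.get(term.lower(), set())

print(search("prague"))                    # {'fluxus_east'}
print(search("film") | search("fluxus"))   # both documents, one query
```
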
This kind of research also obviously benefits from online translation tools and
multilingual case bibliographies online, as well as from second-hand bookstores and
small specialized libraries that play a corrective role to public ones, and whose
"open access" potential has so far been explored only to a very small extent, but
which I will not discuss further here for lack of time.
Writing
Disciplinarity and patriotism are "embedded" in texts themselves; I repeat that I do not say this in a pejorative way.
Bibliographic records in the bodies of texts, notes, attributions of sources and
appended references can be read as formatted addresses of other texts, making
apparent a kind of intertextual structure well known from hypertext documents.
For the reader, however, these references remain "virtual". When following a reference,
she is led back to a library, and if interested in more references, to more libraries.
Authors instead assume a certain general erudition in their readers, and following
references to their sources is perceived as an exception to the standard
self-limitation of reading only the body of the text. Techniques of writing with
virtual bibliographies thus affirm national-disciplinary discourses and form readers
and authors proficient in the field of references set by the collections of local libraries
and the so-called standard literature of the fields they became familiar with during their
studies.
When, in this regime of writing, someone in the Czech Republic wants to refer to
the work of Gilbert Simondon or Alexander Bogdanov, to give an example, the
effect of his work will be minimal, since practically nothing by these authors has
been translated into Czech. A colleague reading closely is left to order the
books through a library and wait three to four weeks, to order them from an online
store, to travel to find them, or to search for them online. In the case of
these authors, this applies to readers in the vast majority of countries worldwide. And we
can say with certainty that this is the case not only with Simondon and Bogdanov
but with the vast majority of authors. Libraries, as nationally and pyramidally situated
institutions, face real challenges with regard to the needs of free research.
This is, to be sure, merely one aspect of the techniques of writing.

Reading
Reading texts with "live" references and bibliographies on electronic devices is
today not only imaginable but also realisable. This way of reading
allows following references to other texts, to visual material and to other related texts
by an author, but also working with occurrences of words in the text, etc., bringing
reading closer to textual analysis and other interesting levels. Due to time
limits I will sketch only one example.
Linear reading proceeds from the beginning of a text to its end, while "tree-like"
reading moves through the content structure of the document and through
occurrences of indexed words. Techniques of close reading extend yet another aspect:
"moving" through bibliographic references in one document to
particular pages or passages in another. They make the virtual reference plastic;
texts are separated from one another merely by a click or a tap.
We are well familiar with a similar movement through content on the web:
surfing, browsing, clicking through. This leads to an interesting parallel: the standards
of structuring, composing, etc., of texts in the humanities have been
evolving for centuries, compared to the mere decades of the web. From
this stems one of the historical challenges the humanities face today:
how to attune to the existence of the web, and most importantly to the epistemological
consequences of its irreversible social penetration. Uploading a PDF online offers
only a taste of the changes in how we gain and make knowledge, and how we know.
This applies both ways: what is at stake is not only making the production of the
humanities "available" online, not only open access, but also the
ways in which the humanities realise the electronic and technical reality of their
own production, with regard to research, writing, reading and publishing.
Publishing
The analogy between information agencies and national libraries also points to
the fact that a large portion of publications, particularly those created in software,
is electronic. The exceptions, however, are significant. They include works made,
typeset, illustrated and copied manually, such as manuscripts written on paper
or other media, by hand or using a typewriter or other mechanical means, and
other pre-digital techniques such as lithography, offset, etc., as well as various forms of
writing such as clay tablets, scrolls and codices; in other words, the history of print and
publishing in its striking variety, all of which provide authors and publishers with
heterogeneous means of expression. Although this "segment" is today generally
perceived as artists' books of interest primarily to collectors, the current process
of massive digitization has triggered revivals, comebacks, transformations and
novel approaches to publishing. And it is these publications whose nature is closer
to the label "book" than the automated electro-chemical version of offset
lithography of digital files on acid-free paper.
Despite this, it is remarkable to observe a view spreading among publishers that
books created in software are books with the attributes we have known for ages. On
top of that, there is a tendency to handle files such as PDFs, EPUBs, MOBIs and
others as if they were printed books, even subjecting them to the rules of the limited
edition, a consequence of which can be found in the rise of so-called electronic libraries
that "lend" PDF files: while someone reads one, other users are left to wait in
line.
From the point of view of today's humanities research, meanwhile, mass-printed books
are first of all archives of cultural content, preserved in this form for the
time when we run out of electricity or have the internet "switched off" in some other
way.

III. Monoskop
Finally, I am getting to Monoskop, and to begin with I will try to formulate
a brief definition of it, in three versions.
From the point of view of the humanities, Monoskop is a research, or an inquiry,
whose object by its nature renders no answer definite, since that object includes
art and culture in their widest sense, from folk music through visual poetry to
experimental film, namely their history as well as their theory and techniques. The
research is framed by the very means of its recording, which makes it a practice whose
record is an expression with aesthetic qualities, which in turn means that the process
of the research is subject to creative decisions whose outcomes are perceived
aesthetically as well.
In the language of cultural management, Monoskop is an independent research
project whose aim is subject to change according to its ongoing findings; which
has no legal body and thus, as an organisation, does not apply for funding; whose
participants have no set roles; and which, notably, operates with no deadlines. It reaches
a global public about which, respecting the privacy of internet users, there
are no statistics other than the general statistics of its social network channels and
the number of people and bots registered on its website and subscribed to its
newsletter.
At the same time, technically speaking, Monoskop is primarily a website,
and in this regard it is no different from any other communication medium whose
function is to complicate interpersonal communication, if only because
it is a medium with its own specific language, materiality, duration and access.

Contemporary media
Monoskop began ten years ago in the milieu of a group of people running
a cultural space where they organised events, workshops, discussions, a festival,
etc. Their expertise, if we may so call the trace left by years spent in higher
education, varied widely, spanning from fine art, architecture and philosophy,
through art history and literary theory, to library studies, cognitive science and
information technology. Each of us was obviously interested in these and other
fields beyond his or her own, but in practice the substance whose
centripetal effects brought us into collaboration was named by the terms new media,
media culture and media art.
Notably, it was not contemporary art, because a constituent part of the praxis was
also non-visual expression, information media, etc., so the research began with the
essentially naive question "of what are we contemporary?". Not much had been
written about media culture and art as such, a fact I perceived as a drawback
but also as a challenge.
Reflection, discussion and critique need to be grounded in reality, in the wider
context of the field, so the research began in the field. From the beginning, the
Monoskop website served to record the environment: the people, groups,
organizations and events we had been in touch with and which were more or
less explicitly affiliated with media culture. The result is primarily a social
geography of live media culture and art, structured on the wiki into cities, with
a focus on the two most recent decades.
Cities and agents
The first aim was to compile an overview of the agents of this geography in their
wide variety, from small independent and short-lived initiatives to established
museums. The focus on the 1990s and 2000s is of course problematic. One of
its qualities is a parallel with the history of the World Wide Web, which goes back
precisely to the early 1990s and which is, on the one hand, the primary recording
medium of the Monoskop research and, on the other, a relevant medium of
self-archiving and, stemming from its properties, of presentation; in other words, a
platform on which agents not only meet but potentially influence one another
as well.
http://monoskop.org/Prague
The records are of diverse length and quality, but the priorities for what they
consist of can be generally summed up in several points, in the following order:


1. Inclusion of a person, organisation or event in the context of the structure.
So in the case of a festival or conference held in Prague, the most important
thing is to mention it in the events section of the page on Prague.
2. Links to their web presence from inside their wiki pages, which usually
implies their (self-)presentation.
http://monoskop.org/The_Media_Are_With_Us
3. Basic information, including the name or title in the original language, dates
of birth, foundation or realization, and relations to other agents, ideally through
links inside the wiki. These are presented as narrative, in English.
4. Literature or bibliography in as many languages as possible, with links to
online versions of texts if there are any.
5. Biographical and other information relevant to the object of the research,
with preference for material appearing online for the first time.
6. Audiovisual material and works, especially those that cannot be found on
linked websites.
Even though the pages are structured in much the same way, the input fields are not
structured: when you create a wiki account and decide to edit or add an entry, the
wiki editor offers you merely one input box for continuous text, as is the case
on other wiki websites. A better way to describe their format is thus articles.
There are many related questions, about representation, research methodology,
openness and participation, formalization, etc., but I am not going to discuss them
here due to time constraints.
The first research layer thus consists of live and active agents, relations among
them and with them.
Countries
Another layer relates to the question of what the field of media culture
and art stems from: what it consciously, but also not fully
consciously, builds upon, comments on, relates to and negates; in other words, of
what it may be perceived as a post-, meta-, anti-, retro-, quasi- and neo-legacy.
The approach of national histories of art of the 20th century proved relevant
here. These entries are structured in the same way as the cities: people,
groups, events, literature, building at the same time upon historical art forms and
periods as they are reflected in a range of literature.

http://monoskop.org/Czech_Republic
The overviews are purposely organised without any attempt to make relations
to the present more explicit, in order to leave open a wide range of interpretations
and connotations, and to encourage them at the same time.
The focus on the art of the 20th century was originally related to the fact that the
researched countries were mostly those of central and eastern Europe, whose modern
national states had built formations preserving this field in archives, museums and
collections, but also in publications, etc. Obviously, I am not saying that contemporary
media culture is necessarily archived on the web while the art of the 20th century lies in
collections "offline"; it applies vice versa as well.
In this way, new articles began to appear about filmmakers, fine artists, theorists
and other participants in the artistic life of the previous century.
Since then the focus has expanded considerably, to more than a century of art and
new media across the whole continent. Still, this constitutes merely another layer of the
research, one which is as yet a collection of fragmentary data without much
context. Soon we also hit the limit of what is online about this field. The next
question was how to work with printed sources in the internet environment.
Log
http://monoskop.org/log
When I installed this blog five years ago, I treated it as a side project, an offshoot,
which, by the fact of being online, might serve not only as an archive of selected
source literature for the Monoskop research but also as a resource for others, mainly
students in the humanities. A few months later I found Aaaarg, then oriented
mainly towards critical theory and philosophy; there was also Gigapedia, with
publications without thematic orientation; and several other password-protected
community library portals. These were the first sources where I found relevant literature
in electronic versions; later there were others too. I began to scan books and catalogues
myself and to receive a large number of scans by email, and soon came to
realise that every new entry is an event of its own, and not only for myself. Judging
by the response, the website has a wide usership across all continents.
At this point it is proper to mention copyright. When deciding whether
to include this or that publication, there are at least two considerations always present.
One brings me back to my local library on the outskirts of Bratislava in the early
1990s and asks: if I had found this book there and then, could it have changed
my life? Because the books that did, I was given only later and elsewhere; and here I
think of people sitting behind computers in Belarus, China or Congo. And even
if not, the second consideration is whether the text has the potential to open up some
serious questions about disciplinarity or national discursivity in the humanities;
here I am reminded of a recent study claiming that more than half
of academic publications are read by no more than three people: their author,
reviewer and editor. This does not imply that it is necessary to promote them
to more people, but rather to think about why this is so. It seems that the
consequences of combining high selectivity with open access resonate
with publishers and authors as well, from whom complaints are rather scarce; and
even if I sometimes do not understand the reasons for those I receive, I respect them.
Media technology
Throughout the years I have come to two main findings about media and technology,
from the ontological perspective.
For a long time I tended to treat technologies as objects, things, whereas now
it seems much more productive to see them as processes, techniques. After all,
neither does the biologist speak about the deer as biology. In this sense, technology is
the science of techniques, including cultural techniques, which span from reading,
writing and counting to painting, programming and publishing.
Media in the humanities are a compound of two long-unrelated histories. One
treats media as means of communication, signals sent from point A to
point B, lacking context and meaning. The other speaks about media as artistic
means of expression, such as painting, sculpture, poetry, theatre, music or
film. The term "media art" is emblematic of this amalgam, and the historical
awareness of these two threads sheds new light on it.
Media technology in art and the humanities continues to be the primary object of
Monoskop's research.
I have attempted to comment on the political, aesthetic and technical aspects of publishing.
Let me finish by saying that Monoskop is an initiative open to people and to the future,
and you are more than welcome to take part in it.

Dušan Barok
Written May 1-7, 2014, in Bergen and Prague. Translated by the author on May 10-13,
2014. This version generated June 10, 2014.


Dockray
Interface Access Loss
2013


Interface Access Loss

I want to begin this talk at the end -- by which I mean the end of property, at least according to
the cyber-utopian account of things, where digital file sharing and online communication liberate
culture from corporations and their drive for profit. This is just one of the promised forms of
emancipation -- property, in a sense, was undone. People, on a massive scale, used their
computers and their internet connections to share digitized versions of their objects with each
other, quickly producing a different, common form of ownership. The crisis that this provoked is
well known -- it could be described in one word: Napster. What is less recognized, because it is
still very much in process, is the subsequent undoing of property, of both the private and the common
kind. What follows is one story of "the cloud" -- the post-dot-com-bubble techno-super-entity
which sucks up property, labor, and free time.

Object, Interface

It's debated whether the growing automation of production leads to global structural
unemployment or not -- Karl Marx wrote that "the self-expansion of capital by means of machinery
is thenceforward directly proportional to the number of the workpeople, whose means of
livelihood have been destroyed by that machinery" - but the promise is, of course, that when
robots do the work, we humans are free to be creative. Karl Kautsky predicted that increasing
automation would actually lead, not to a mass surplus population or widespread creativity, but
something much more mundane: the growth of clerks and bookkeepers, and the expansion of
unproductive sectors like "the banking system, the credit system, insurance empires and
advertising."

Marx was analyzing the number of people employed by some of the new industries in the middle
of the 19th century: "gas-works, telegraphy, photography, steam navigation, and railways." The
facts were that these industries were incredibly important, expansive and growing, highly
mechanized.. and employed a very small number of people. It is difficult not to read his study of
these technologies of connection and communication against the background of our present
moment, in which the rise of the Internet has been accompanied by the deindustrialization of
cities, increased migrant and mobile labor, and jobs made obsolete by computation.

There are obvious examples of the impact of computation on the workplace: at factories and
distribution centers, robots engineered with computer-vision can replace a handful of workers,
with a savings of millions of dollars per robot over the life of the system. And there are less
apparent examples as well, like algorithms determining when and where to hire people and for
how long, according to fluctuating conditions.
Both examples have parallels within computer programming, namely code reuse and garbage
collection. Code reuse refers to the practice of writing software in such a way that the code can be
used again later, in another program, to perform the same task. It is considered wasteful to give the
same time, attention, and energy to a function twice, because the development environment is not an
assembly line -- a programmer shouldn't repeat herself. Such repetition gives way to
copy-and-pasting (or merely calling). The analogy here is to the robot, to the replacement of human labor
with technology.

Now, when a program is in the midst of being executed, the computer's memory fills with data --
but some of it is obsolete, no longer necessary for that program to run. If left alone, the memory
would become clogged, the program would crash, the computer might crash. It is the role of the
garbage collector to free up memory, deleting what is no longer in use. And here, I'm making the
analogy with flexible labor, workers being made redundant, and so on.
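
Both practices the analogy rests on can be shown in a few lines. A minimal sketch in Python, with an invented function; it illustrates the two ideas, not any particular system:

```python
import gc

# Code reuse: the labor is specified once and merely "called" thereafter,
# never performed by hand again.
def word_count(text: str) -> int:
    return len(text.split())

for document in ["a short note", "a somewhat longer memorandum"]:
    print(word_count(document))   # the same work, re-invoked

# Garbage collection: once nothing refers to an object any more, the
# runtime reclaims its memory so the program does not clog and crash.
data = ["obsolete intermediate results"] * 1000
data = None          # the list becomes unreachable -- "made redundant"
print(gc.collect())  # CPython frees most garbage by reference counting;
                     # collect() sweeps what remains (reference cycles)
```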

In Object-Oriented Programming, a programmer designs the software that she is writing around
"objects," where each object is conceptually divided into "public" and "private" parts. The public
parts are accessible to other objects, but the private ones are hidden from the world outside the
boundaries of that object. It's a "black box" -- a thing that can be known through its inputs and
outputs, even in total ignorance of its internal mechanisms. What difference does it make if the
code is written in one way versus another.. if it behaves the same? As William James wrote, "If no
practical difference whatever can be traced, then the alternatives mean practically the same thing,
and all dispute is idle."
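
In code, the division looks roughly like this. A minimal sketch in Python, which marks privacy by convention and name-mangling; the class and its members are invented for illustration:

```python
class Account:
    """A black box: knowable through its inputs and outputs alone."""

    def __init__(self) -> None:
        self.__ledger: list[int] = []    # private part, hidden inside

    def deposit(self, amount: int) -> None:    # public input
        self.__ledger.append(amount)

    def balance(self) -> int:                  # public output
        return sum(self.__ledger)

acct = Account()
acct.deposit(10)
acct.deposit(5)
print(acct.balance())   # 15 -- the behavior is all the outside sees
# acct.__ledger         # AttributeError: the internal mechanism is hidden
```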

By merely having a public interface, an object is already a social entity. It makes no sense to even
provide access to the outside if there are no potential objects with which to interact! So to
understand the object-oriented program, we must scale up -- not by increasing the size or
complexity of the object, but instead by increasing the number and types of objects such that their
relations become more dense. The result is an intricate machine with an on and an off state, rather
than a beginning and an end. Its parts are interchangeable, provided that they reliably produce
the same behavior, the same inputs and outputs. Furthermore, this machine can be modified:
objects can be added and removed, changing but not destroying the machine; and it might be,
to use Gerald Raunig's appropriate term, "concatenated" with other machines.

Inevitably, this paradigm for describing the relationship between software objects spread outwards,
subsuming more of the universe outside of the immediate code. External programs, powerful
computers, banking institutions, people, and satellites have all been "encapsulated" and
"abstracted" into objects with inputs and outputs. Is this a conceptual reduction of the richness
and complexity of reality? Yes, but only partially. It is also a real description of how people,
institutions, software, and things are being brought into relationship with one another according to
the demands of networked computation.. and the expanding field of objects is exactly those
entities integrated into such a network.

Consider a simple example of decentralized file-sharing: its diagram might represent an
object-oriented piece of software, but here each object is a person-computer, shown in potential relation
to every other person-computer. Files might be sent or received at any point in this machine,
which seems particularly oriented towards circulation and movement. Much remains private, but a
collection of files from every person is made public and opened up to the network. Taken as a
whole, the entire collection of all files, which on the one hand exceeds the storage capacity of
any one person's technical hardware, is on the other hand entirely available to every
person-computer. If the files were books.. then this collective collection would be a public library.
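
The arithmetic of this "collective collection" is easy to model. A sketch over invented data: each peer publishes a share of its files, and the union, larger than any single share, is reachable from every node:

```python
# Each person-computer keeps much private but publishes a share of files.
peers = {
    "peer_a": {"simondon_1958.pdf", "bogdanov_1922.pdf"},
    "peer_b": {"bogdanov_1922.pdf", "fluxus_scores.pdf"},
    "peer_c": {"kautsky_1892.pdf"},
}

# The collective collection: held in full by no one, open to everyone.
collection = set().union(*peers.values())
print(len(collection))               # 4 titles, more than any single share

def sources(filename: str) -> list[str]:
    """Every peer a file can be fetched from, possibly in parallel."""
    return [p for p, files in peers.items() if filename in files]

print(sources("bogdanov_1922.pdf"))  # ['peer_a', 'peer_b'] -- the "swarm"
```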

In order for a system like this to work, for the inputs and the outputs to actually engage with one
another to produce action or transmit data, there needs to be something in place already to enable
meaningful couplings. Before there is any interaction or any relationship, there must be some
common ground in place that allows heterogeneous objects to "talk to each other" (to use a phrase
from the business-casual language of the Californian Ideology). The term used for such a common
ground, especially on the Internet, is platform: a word for that which enables and anticipates
future action without directly producing it. A platform provides tools and resources to the objects
that run "on top" of it so that those objects don't need to have their own tools and
resources. In this sense, the platform offers itself as a way for objects to externalize (and reuse)
labor. Communication between objects is one of the most significant actions that a platform can
provide, but it requires that the objects conform some amount of their inputs and outputs to the
specifications dictated by the platform.

But haven't I only introduced another coupling, this time not between two objects but between
the object and the platform? What I'm talking about with "couplings" is the meeting point between
things -- in other words, an "interface." In the terms of OOP, the interface is an abstraction that
defines what kinds of interaction are possible with an object. It maps out the public face of the
object in a way that is legible and accessible to other objects. Similarly, computer interfaces like
screens and keyboards are designed to meet with human interfaces like fingers and eyes, allowing
for a specific form of interaction between person and machine. Any coupling between objects
passes through some interface, and every interface obscures as much as it reveals: it establishes
the boundary between what is public and what is private, what is visible and what is not. The
dominant aesthetic values of user interface design actually privilege such concealment as "good
design," appealing to principles of simplicity, cleanliness, and clarity.
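
The OOP sense of "interface" can be stated directly in code. A minimal sketch using Python's typing.Protocol, with all names invented: the interface declares which interactions are possible, while everything behind it stays out of view:

```python
from typing import Protocol

class Readable(Protocol):
    """The public face: the only interactions a coupling can rely on."""
    def read(self, n: int) -> bytes: ...

class LocalFile:
    def __init__(self, data: bytes) -> None:
        self._data = data                  # concealed behind the interface
    def read(self, n: int) -> bytes:
        chunk, self._data = self._data[:n], self._data[n:]
        return chunk

def consume(source: Readable) -> bytes:
    # The caller knows only the interface; a file, a socket or a cloud
    # service could stand in back without making a difference in front.
    return source.read(4)

print(consume(LocalFile(b"book text")))   # b'book'
```
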
Cloud, Access

One practical outcome of this has been that there can be tectonic shifts behind the interface,
where entire systems are restructured or revolutionized, without any interruption, as long as the
interface itself remains essentially unchanged. In Pragmatism's terms, a successful interface keeps
any difference (in back) from making a difference (in front). Using books again as an example: for
consumers to become accustomed to the initial discomfort of purchasing a product online instead
of from a shop, the interface needs to make it so that "buying a book" is something that could be
interchangeably accomplished either by a traditional bookstore or the online "marketplace"
equivalent. But behind the interface is Amazon, which through low prices and wide selection is
the most visible platform for buying books, and which uses that position to push both retailers and
publishers to, at best, the bare minimum of profitability.

In addition to selling things to people and collecting data about its users (what they look at and
what they buy) to personalize product recommendations, Amazon has also made an effort to be a
platform for the technical and logistical parts of other retailers. Ultimately collecting data from
them as well, Amazon realizes a competitive advantage from having a comprehensive,
up-to-the-minute perspective on market trends and inventories. This volume of data is so vast and valuable
that warehouses packed with computers are constructed to store it, protect it, and make it readily
available to algorithms. Data centers such as these organize how commodities circulate (they run
business applications, store data about retail, manage fulfillment) but also, increasingly, they
hold the commodity itself -- for example, the book. Digital book sales started the millennium very
slowly but by 2010 had overtaken hardcover sales.

Amazon's store of digital books (or Apple's or Google's, for that matter) is a distorted reflection of
the collection circulating within the file-sharing network, displaced from personal computers to
corporate data centers. Here are two regimes of digital property: the swarm and the cloud. For
swarms (a reference to swarm downloading, where a single file can be downloaded in parallel
from multiple sources), property is held in common between peers; on the cloud, however, property is
positioned out of reach, accessible only through an interface that has absorbed legal
and business requirements.

It's just half of the story, however, to associate the cloud with mammoth data centers; the other
half is to be found in our hands and laps. Thin computing, including tablets and e-readers, iPads
and Kindles, and mobile phones, has co-evolved with data centers, offering powerful, lightweight
computing precisely because so much processing and storage has been externalized.

In this technical configuration of the cloud, the thin computer and the fat data center meet through
an interface, inevitably clean and simple, that manages access to the remote resources. Typically,
a person needs to agree to certain "terms of service," have a unique, measurable account, and
provide payment information; in return, access is granted. This access is not ownership in the
conventional sense of a book, or even the digital sense of a file, but rather a license that gives the
person a "non-exclusive right to keep a permanent copy… solely for your personal and
non-commercial use," contradicting the First Sale Doctrine, which gives the "owner" the right to sell,
lease, or rent their copy to anyone they choose at any price they choose. The doctrine,
established within America's legal system in 1908, separated the right of reproduction from that of
distribution as a way to "exhaust" the copyright holder's control over the commodities that people
purchased.. legitimizing institutions like used book stores and public libraries. Computer software
famously attempted to bypass the First Sale Doctrine with its "shrink wrap" licenses, which restricted
the rights of the buyer once she broke through the plastic packaging to open the product. This
practice has only evolved and become ubiquitous over the last three decades as software began
being distributed digitally through networks rather than as physical objects in stores. Such
contradictions are symptoms of a shift in property regimes, or what Jeremy Rifkin called "the age
of access." He writes that "property continues to exist but is far less likely to be exchanged in
markets. Instead, suppliers hold on to property in the new economy and lease, rent, or charge an
admission fee, subscription, or membership dues for its short-term use."

Thinking again of books, Rifkin's description gives the image of a paid library emerging as the
synthesis of the public library and the marketplace for commodity exchange. Considering how, on
the one side, traditional public libraries are having their collections deaccessioned, their hours of
operation cut, and in some cases are being closed down entirely, while on the other side the
traditional publishing industry finds its stores, books, and profits dematerialized, the image is
perhaps appropriate. Server racks, in photographs inside data centers, bear an eerie resemblance
to library stacks, while e-readers are consciously designed to look and feel something like a
book. Yet, when one peers down into the screen of the device, one sees both the book and the
library.

Like a Facebook account, which must uniquely correspond to a real person, the e-reader is an
individualizing device. It is the object that establishes trusted access to books stored in the cloud
and ensures that each and every person purchases their own rights to read each book. The only
transfer allowed is of the device itself, which is the one thing that a person actually does own.
But even then, such an act must be reported back to the cloud: the hardware needs to be
de-registered and then re-registered with credit card and authentication details of the new owner.

This is no library -- or it's a library only in the most impoverished sense of the word. It is a new
enclosure, and it is a familiar story: things in the world (from letters, to photographs, to albums, to
books) are digitized (as emails, JPEGs, MP3s, and PDFs) and subsequently migrate to a remote
location or service (Gmail, Facebook, iTunes, the Kindle Store). The middle phase is the biggest
disruption, when the interface does the poorest job of concealing the material transformations taking
place, when the work involved in creating those transformations is most apparent, often because
the person themselves is deeply involved in the process (of ripping vinyl, for instance). In the third
phase, the user interface becomes easier, more "frictionless," and what appears to be just another
application or folder on one's computer is an engorged, property-and-energy-hungry warehouse a
thousand miles away.

Capture, Loss

Intellectual property's enclosure is easy enough to imagine in warehouses of remote, secure hard
drives. But the cloud internalizes processing as well as storage, capturing the new forms of
cooperation and collaboration characterizing the new economy and its immaterial labor. Social
relations are transmuted into database relations on the "social web," which absorbs
self-organization as well. Because of this, the cloud impacts the production of
publications as strongly as their consumption, in the traditional sense.

Storage, applications, and services offered in the cloud are marketed for consumption by authors
and publishers alike. Document editing, project management, and accounting are slowly peeled
away from the office staff and personal computers into the data centers; interfaces are established
into various publication channels, from print-on-demand to digital book platforms. In the fully
realized vision of cloud publishing, the entire technical and logistical apparatus is externalized,
leaving only the human labor.. and their thin devices. Little distinguishes the author-object
from the editor-object from the reader-object. All of them.. maintain their position in the
network by paying for lightweight computers and their updates, cloud services, and broadband
internet connections.
On the production side of the book, the promise of the cloud is a recovery of the profits "lost" to
file-sharing, as all that exchange is disciplined, standardized and measured. Consumers are finally
promised the access to the history of human knowledge that they had already improvised by
themselves, but now without the omnipresent threat of legal prosecution. One has the sneaking
suspicion though.. that such a compromise is as hollow.. as the promises made to a desperate city
of the jobs that will be created in a newly constructed data center -- and that pitting "food on the table"
against "access to knowledge" is both a distraction from and a legitimation of the forms of power
emerging in the cloud. It's a distraction because it's by policing access to knowledge that the
middle-man platform can extract value from publication, on both the writing and reading sides of
the book; and it's a legitimation because the platform poses itself as the only entity that can resolve
the contradiction between the two sides.

When the platform recedes behind the interface, these two sides are the most visible
antagonism, in a tug-of-war with each other; yet neither the "producers" nor the "consumers" of
publications are becoming more wealthy, or working less to survive. If we turn the picture
sideways, however, a new contradiction emerges, between the indebted, living labor of authors,
editors, translators, and readers on one side, and on the other.. data centers, semiconductors,
mobile technology, expropriated software, power companies, and intellectual property.
The talk in the data center industry of the "industrialization" of the cloud refers to the scientific
approach to improving design, efficiency, and performance. But the term also recalls the basic
narrative of the Industrial Revolution: the movement from home-based manufacturing by hand to
large-scale production in factories. As desktop computers pass into obsolescence, we shift from a
networked but small-scale relationship to computation (think of "home publishing") to a
reorganized form of production that puts the accumulated energy of millions to work through
these cloud companies and their modernized data centers.

What kind of buildings are these blank superstructures? Factories for the 21st century? An engineer
named Ken Patchett described the Facebook data center that way in a television interview: "This is
a factory. It's just a different kind of factory than you might be used to." Those factories that we're
"used to" continue to exist (at Foxconn, for instance), producing the infrastructure, under
recognizably exploitative conditions, for a "different kind of factory" -- a factory that extends far
beyond the walls of the data center.

But the idea of the factory is only part of the picture: this building is also a mine.. and the
dispersed workforce devotes most of its waking hours to mining-in-reverse, packing it full of data,
in the expectation that someone, soon, will figure out how to pull out something valuable.
Both metaphors rely on the image of a mass of workers (dispersed as it may be) and leave aside a darker
and more difficult possibility: that the data center is like the hydroelectric plant, damming up property,
sociality, creativity and knowledge, while engineers and financiers look for the algorithms that will
release the accumulated cultural and social resources on demand, as profit.

This returns us to the interface, the site of struggles over the management and control of access to
property and infrastructure. Previously, these struggles were situated within the computer-object
and the implied freedom provided by its computation, storage, and possibilities for connection
with others. Now, however, the eviscerated device is more interface than object, and it is exactly
here, at the interface, that the new technological enclosures have taken form (see, for example,
Apple's iOS products, Google's search box, and Amazon's "marketplace"). Control over the
interface is guaranteed by control over the entire techno-business stack: the distributed hardware
devices, the centralized data centers, and the software that mediates the space between. Every major
technology corporation must now operate on all levels to protect against any loss.

There is a centripetal force to the cloud, and this essay has been written in its irresistible pull. In
spite of the sheer mass of capital that is organized to produce this gravity and the seeming
insurmountability of it all, there is no chance that the system will absolutely manage and control
the noise within it. Riots break out on the factory floor; algorithmic trading wreaks havoc on the
stock market in an instant; data centers go offline; 100 million Facebook accounts are discovered
to be fake; the list will go on. These cracks in the interface don't point to any possible future, or
any desirable one, but they do draw attention to openings that might circumvent the logic of
access.

"What happens from there is another question." This is where I left things off in the text when I
finished it a year ago. It's a disappointing ending: we just have to invent ways of occupying the
destruction, violence and collapse that emerge out of economic inequality, global warming,
dismantled social welfare, and so on. And there's not much that's happened since then to make us
very optimistic - maybe here I only have to mention the NSA. But as I began with an ending, I
really should end at a beginning.
I think we were obliged to adopt a negative, critical position in response to the cyber-utopianism of
the last almost twenty years, whether in its naive or cynical forms. We had to identify and theorize the
darker side of things. But it can become habitual, and when the dark side materializes, as it has
over the past few years, so that everyone knows the truth, then the obligation flips around,
doesn't it? To break out of habitual criticism as the tacit, defeated acceptance of what is. But what
could be? Where do we find new political imaginaries? Not to ask what the bright side is, or what
we can do to cope, but what are the genuinely emancipatory possibilities that are somehow still
latent, buried under the present, or emerging within those ruptures in it? I can't make it all
the way to a happy ending, to a happy beginning, but at least it's a beginning and not the end.

 
