Barok
Techniques of Publishing
2014


Techniques of Publishing

Draft translation of a talk given at the seminar Informace mezi komoditou a komunitou [Information between Commodity and Community], held at Tranzitdisplay in Prague, Czech Republic, on May 6, 2014

My contribution has three parts. I will begin by sketching the current environment of publishing in general, move on to some of the specifics of publishing in the humanities and art, and end with a brief introduction to the Monoskop initiative, which I was asked to include in my talk.
I would like to thank Milos Vojtechovsky, Matej Strnad and CAS/FAMU for the invitation, and Tranzitdisplay for hosting this seminar. It offers an opportunity for reflection at a decent distance from the previous presentation of Monoskop in Prague eight years ago, when I took part in a workshop on new media education prepared by Miloš and Denisa Kera. Many things have changed since then, not only in new media but in the humanities in general, and I will try to articulate some of these changes from today’s perspective, primarily the perspective of publishing.

I. The Environment of Publishing
One change, perhaps the most serious, and one that bears on publishing in the humanities as well, is that what only a year ago was treated as the paranoia of a bunch of so-called technological enthusiasts is today a fact with which the global public is well acquainted: we are all being surveilled. Virtually every utterance on the internet, or rather every utterance made by means of equipment connected to it through standard protocols, is recorded, in encrypted or unencrypted form, on the servers of intelligence agencies, while copies of a striking share of these data sit on the servers of private companies. We are only at the beginning of a civil mobilization to reverse this situation, and the future is open, yet so far nothing suggests that there is any real alternative other than “to demand the impossible.” There are at least two certainties today: surveillance is a feature of every communication technology controlled by third parties, from post, telegraphy and telephony to the internet; and at the same time it is a feature of ruling power in all the variants humankind has come to know. In this regard, democracy can also be understood as the involvement of its participants in deciding on the scale and use of the information collected in this way.
I mention this because it suggests that all publishing initiatives, from libraries and archives through publishing houses to schools, have their online activities, backends, shared documents and email communication recorded by public institutions, which intelligence agencies are, or at least ought to be.
With regard to publishing houses, it is notable that books and other publications today are printed from digital files and delivered to print over email; it is thus no surprise to claim that a significant share of electronically prepared publications is stored on servers in the public service. This means that besides being required to send a number of printed copies to their national libraries, publishers in effect send electronic versions to intelligence agencies as well. Obviously, the agencies could not care less about them, but that does not change the likely fact that, whatever it means, the world’s largest electronic repository of publications today consists of the server farms of the NSA.
Intelligence agencies archive publications without the approval, perhaps without the awareness, and indeed despite the disapproval of their authors and publishers, as an “incidental” effect of their surveillance techniques. This situation is obviously radically different from the totalitarianism we came to know. Even though secret agencies in the Eastern Bloc blackmailed people into producing miserable literature as their agents, samizdat publications could at least theoretically escape their attention.
This is not the only difference. While captured samizdats were read by agents of flesh and blood, publications collected through internet surveillance are “read” by software agents. Both scan texts for “signals”, i.e. terms and phrases whose occurrences trigger interpretative mechanisms controlling the operative components of their organizations.
Today, publishing is similarly political, and from the point of view of power a potentially subversive activity, as it was in communist Czechoslovakia. The difference lies in its scale, reach and technique.
One of the messages of the recent “revelations” is that while it is recommended to encrypt private communication, the internet is, for its users, also a medium of direct contact with power. SEO, or search engine optimization, is now as relevant a technique for websites as for books and other publications, since all of them are read by similar algorithms. Authors can read this situation as a political dimension of their work, as a challenge to transform and model these algorithms with their texts.


II. Techniques of Research in Humanities Literature
Compiling the bibliography
Through this circuitry we arrive at the audience: readers. Today, readers also include software and algorithms, such as those used for “reading” by intelligence agencies and corporations, as well as those that facilitate reading for the so-called ordinary reader searching for information online, and for the “expert” reader searching primarily in library systems.
Libraries, as we said, differ from intelligence agencies in that they are funded by the public not to hide publications from it but to provide access to them. A telling paradox of the age is that, on the one hand, intelligence agencies store almost all contemporary book production in electronic form while caring nothing about it, since the “signal” information lies elsewhere; on the other hand, in order to provide electronic access, paid or direct, libraries must undertake costly scanning even of publications that were prepared for print electronically.
A more remarkable difference is, of course, that libraries select and catalogue publications.
Their methods of selection are determined in the first place by their function as public institutions protecting and projecting patriotic values, which is reflected in their preference for domestic literature, i.e. literature written in official state languages. Their methods of cataloguing, in turn, are characterized by sorting by bibliographic records, particularly by categories of disciplines ordered in a tree structure of knowledge. As a result, libraries shape research, including academic research, towards a discursivity that is national and disciplinary, or focused on the oeuvre of a particular author.
Digitizing catalogue records and allowing readers to search library indexes by their structural items, i.e. author, publisher, place and year of publication, words in the title, and discipline, does not reverse this tendency at all, but rather extends it to the web.
I do not intend to underestimate the value and benefits of library work, nor the importance of discipline-centered writing or of recognizing an author’s oeuvre. But consider an author working on an article who, in the early phase of research, needs to prepare a bibliography on the activity of Fluxus in central Europe, or on the use of documentary film in education. Such research cuts across national boundaries and branches of disciplines, and the author is left to travel not only to locate artefacts, protagonists and experts in the field, but also to find literature, which makes even the mere process of compiling a bibliography a relatively demanding and costly activity.

In this sense, the digitization of publications and archival material, the provision of free online access to them, and the enabling of full-text search, in other words “open access”, catalyzes research across political-geographical and disciplinary configurations. While the index of a printed book contains only selected terms, and in order to search an index across several books the researcher has to have them all at hand, software-enabled search in digitized texts (with good OCR) works with an index of every single term in all of them.
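To make tangible what such software-enabled search rests on, here is a minimal sketch of an inverted index (a hypothetical TypeScript illustration, not part of the original talk; the corpus names are invented): every single term of every digitized text is indexed, so one query consults all books at once.

```typescript
// Minimal inverted index: map every term to the texts (and positions)
// in which it occurs, so one query searches the whole corpus at once.
type Posting = { doc: string; position: number };

function buildIndex(corpus: Record<string, string>): Map<string, Posting[]> {
  const index = new Map<string, Posting[]>();
  for (const [doc, text] of Object.entries(corpus)) {
    const terms = text.toLowerCase().split(/\W+/).filter(Boolean);
    terms.forEach((term, position) => {
      if (!index.has(term)) index.set(term, []);
      index.get(term)!.push({ doc, position });
    });
  }
  return index;
}

// Unlike the index of a printed book, nothing here is "selected":
// every term of every text is present.
const index = buildIndex({
  "fluxus-cz": "Fluxus events in Prague and Brno",
  "doc-film": "Documentary film in education",
});
console.log(index.get("fluxus")); // [{ doc: "fluxus-cz", position: 0 }]
```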
This kind of research also obviously benefits from online translation tools, multilingual case bibliographies online, as well as second-hand bookstores and small specialized libraries that play a corrective role to public ones, and whose “open access” potential has so far been explored only to a very small extent; but I won’t discuss this further here for lack of time.
Writing
Disciplinarity and patriotism are “embedded” in texts themselves; again, I do not say this pejoratively.
Bibliographic records in the bodies of texts, notes, attributions of sources and appended references can be read as formatted addresses of other texts, making apparent a kind of intertextual structure well known from hypertext documents. For the reader, however, these references are still “virtual”. When following a reference, she is led back to a library, and if interested in more references, to more libraries. Authors instead assume a certain general erudition in their readers, and following references back to their sources is perceived as an exception to the standard self-limitation of reading only the body of the text. Techniques of writing with virtual bibliographies thus affirm national-disciplinary discourses and form readers and authors proficient in the field of references set by the collections of local libraries and the so-called standard literature of the fields they became familiar with during their studies.
When, in this regime of writing, someone in the Czech Republic wants to refer to the work of Gilbert Simondon or Alexander Bogdanov, to give an example, the effect of that work will be minimal, since practically nothing by these authors has been translated into Czech. A closely reading colleague is left to order the books through a library and wait three to four weeks, to order them from an online store, or to travel to find them or search for them online. In the case of these authors, this applies to readers in the vast majority of countries worldwide. And we can say with certainty that this is the case not only with Simondon and Bogdanov but with the vast majority of authors. Libraries, as nationally and pyramidally situated institutions, face real challenges with regard to the needs of free research.
This is surely only one aspect of the techniques of writing.

Reading
Reading texts with “live” references and bibliographies on electronic devices is today not only imaginable but realizable. This way of reading allows one to follow references to other texts, to visual material, and to other related texts by an author, but also to work with occurrences of words in the text, etc., bringing reading closer to textual analysis and other interesting levels. Due to time limits I will sketch only one example.
Linear reading means reading from the beginning of a text to its end; there is also “tree-like” reading, through the content structure of the document and through occurrences of indexed words. Techniques of close reading extend yet another aspect: “moving” through bibliographic references in one document to particular pages or passages in another. They make the virtual reference plastic: texts are separated from one another by a mere click or tap.
We are well familiar with a similar movement through content on the web: surfing, browsing, clicking through. This leads to an interesting parallel: standards for structuring and composing texts in the humanities have been evolving for centuries, compared to mere decades for the web. From this stems one of the historical challenges the humanities face today: how to attune to the existence of the web, and most importantly to the epistemological consequences of its irreversible social penetration. Uploading a PDF online is only a taste of the changes in how we gain and make knowledge, and in how we know.
This applies both ways: what is at stake is not only making the production of the humanities “available” online, not only open access, but also the ways in which the humanities realize the electronic and technical reality of their own production, with regard to research, writing, reading and publishing.
Publishing
The analogy between intelligence agencies and national libraries also points to the fact that a large portion of publications, particularly those created in software, is electronic. The exceptions, however, are significant. They include works made, typeset, illustrated and copied manually, such as manuscripts written on paper or other media, by hand or using a typewriter or other mechanical means, as well as other pre-digital techniques such as lithography and offset, and various forms of writing such as clay tablets, scrolls and codices; in other words, the history of print and publishing in its striking variety, all of which provide authors and publishers with heterogeneous means of expression. Although this “segment” is today generally perceived as artists’ books, of interest primarily to collectors, the current process of massive digitization has triggered revivals, comebacks, transformations of and novel approaches to publishing. And it is these publications whose nature is closer to the label ‘book’ than the automated electro-chemical version of offset lithography of digital files on acid-free paper.
Despite this, it is remarkable to observe a view spreading among publishers that books created in software are books with the attributes we have known for ages. On top of that, there is a tendency to handle files such as PDFs, EPUBs, MOBIs and others as if they were printed books, even subjecting them to the rules of the limited edition; one consequence of this can be found in the rise of so-called electronic libraries that “lend” PDF files, so that while someone reads one, other users are left to wait in line.
Yet from today’s point of view of humanities research, mass-printed books are in the first place archives of cultural content, preserved in this form for the time when we run out of electricity or have the internet ‘switched off’ in some other way.

III. Monoskop
Finally, I come to Monoskop, and to begin with I will try to formulate a brief definition of it, in three versions.
From the point of view of the humanities, Monoskop is a research, or an inquiry, whose object by its very nature renders no answer definite, since that object includes art and culture in their widest sense, from folk music through visual poetry to experimental film, and namely their history as well as theory and techniques. The research is framed by the means of its own recording, which makes it a practice whose record is an expression with aesthetic qualities, which in turn means that the process of research is subject to creative decisions whose outcomes are perceived aesthetically as well.
In the language of cultural management, Monoskop is an independent research project whose aim is subject to change according to its ongoing findings; it has no legal body and thus, as an organization, does not apply for funding; its participants have no set roles; and, notably, it operates with no deadlines. It reaches a global public about which, out of respect for the privacy of internet users, there are no statistics other than general statistics on its social network channels and the numbers of people and bots who have registered on its website and subscribed to its newsletter.
At the same time, technically speaking, Monoskop is primarily a website, and in this regard it is no different from any other communication medium whose function is to complicate interpersonal communication, if only because it is a medium with its own specific language, materiality, duration and access.

Contemporary media
Monoskop began ten years ago in the milieu of a group of people running a cultural space where they organized events, workshops, discussions, a festival, etc. Their expertise, if that is the word for the trace left by years spent in higher education, varied widely, spanning from fine art, architecture and philosophy, through art history and literary theory, to library studies, cognitive science and information technology. Each of us was obviously interested in fields beyond his or her own, but in practice the substance whose centripetal effects brought us into collaboration was named by the terms new media, media culture and media art.
Notably, it was not contemporary art, because a constituent part of the practice was also non-visual expression, information media, etc., so the research began with the essentially naive question ‘of what are we contemporary?’. Not much had been written about media culture and art as such, a fact I perceived as a drawback but also as a challenge.
Reflection, discussion and critique need to be grounded in reality, in the wider context of the field, so the research began in the field. From the beginning, the Monoskop website served to record this environment, including the people, groups, organizations and events we had been in touch with and which were more or less explicitly affiliated with media culture. The result is primarily a social geography of live media culture and art, structured on the wiki into cities, with a focus on the two most recent decades.
Cities and agents
The first aim was to compile an overview of the agents of this geography in their wide variety, from small independent and short-lived initiatives to established museums. The focus on the 1990s and 2000s is of course problematic. One of its qualities is a parallel with the history of the World Wide Web, which goes back precisely to the early 1990s and which is, on the one hand, the primary recording medium of the Monoskop research and, on the other, a relevant medium of self-archiving and, stemming from its properties, of presentation; in other words, a platform on which agents not only meet but potentially influence one another as well.
http://monoskop.org/Prague
The records are of diverse length and quality, but the priorities for what they consist of can generally be summed up in several points, in the following order:

1. Inclusion of a person, organisation or event in the context of the structure. So, in the case of a festival or conference held in Prague, the most important thing is to mention it in the events section of the page on Prague.
2. Links to their web presence from inside their wiki pages, which usually implies their (self-)presentation.
http://monoskop.org/The_Media_Are_With_Us
3. Basic information, including the name or title in the original language, dates of birth, foundation or realization, and relations to other agents, ideally through links inside the wiki. These are presented in narrative form, in English.
4. Literature or bibliography in as many languages as possible, with links to online versions of texts where they exist.
5. Biographical and other information relevant to the object of the research, with preference for material appearing online for the first time.
6. Audiovisual material and works, especially those that cannot be found on linked websites.
Even though the pages are structured in nearly the same way, the input fields are not structured: when you create a wiki account and decide to edit or add an entry, the wiki editor offers you a single input box for continuous text, as is the case on other wiki websites. A better way to describe their format is thus: articles.
There are many related questions, about representation, research methodology, openness and participation, formalization, etc., but I am not going to discuss them here due to time constraints.
The first research layer thus consists of live and active agents, and of relations among them and with them.
Countries
Another layer relates to the question of what the field of media culture and art stems from; of what it consciously, but also not fully consciously, builds upon, comments on, relates to and negates; in other words, of what it may be perceived as a post, meta, anti, retro, quasi and neo legacy.
The approach of national histories of 20th-century art proved relevant here. These entries are structured in the same way as the cities: people, groups, events, literature; at the same time they build upon historical art forms and periods as these are reflected in a range of literature.

http://monoskop.org/Czech_Republic
The overviews are purposely organised without any attempt to make relations to the present more explicit, in order to leave open a wide range of interpretations and connotations, and at the same time to encourage them.
The focus on the art of the 20th century was originally related, since the countries researched were mostly those of central and eastern Europe, to the foundations of modern national states and to the formations preserving this field in archives, museums and collections, but also in publications, etc. Obviously, I am not saying that contemporary media culture is necessarily archived on the web while the art of the 20th century lies in collections “offline”; it applies vice versa as well.
In this way, new articles began to appear about filmmakers, fine artists, theorists and other participants in the artistic life of the previous century.
Since then the focus has expanded considerably, to more than a century of art and new media across the whole continent. Still, this is merely another layer of the research, one which so far remains a collection of fragmentary data without much context. Soon we also hit the limit of what is online about this field. The next question was how to work with printed sources in the internet environment.
Log
http://monoskop.org/log
When I installed this blog five years ago, I treated it as a side project, an offshoot, which, by the fact of being online, might serve not only as an archive of selected source literature for the Monoskop research but also as a resource for others, mainly students in the humanities. A few months later I found Aaaarg, then oriented mainly towards critical theory and philosophy; there was also Gigapedia, with publications of no particular thematic orientation; and several other password-protected community library portals. These were the first sources where I found relevant literature in electronic form; later there were others too. I began to scan books and catalogues myself and to receive a large number of scans by email, and soon came to realise that every new entry is an event of its own, and not only for myself. Judging by the response, the website has a wide usership across all continents.
At this point it is proper to mention copyright. When deciding whether to include this or that publication, at least two considerations are always present. One brings me back to my local library on the outskirts of Bratislava in the early 1990s, and asks: if I had found this book there and then, could it have changed my life? Because the books that did, I was given only later and elsewhere; and here I think of people sitting behind computers in Belarus, China or Congo. And even if not, the second consideration is whether the text has the potential to open up serious questions about disciplinarity or national discursivity in the humanities; here I am reminded of a recent study claiming that more than half of academic publications are read by no more than three people: their author, reviewer and editor. This does not imply that it is necessary to promote them to more people, but rather that we should think about why this is so. It seems that the consequences of combining high selectivity with open access resonate with publishers and authors as well: complaints are rather scarce, and even if I sometimes do not understand the reasons for those I receive, I respect them.
Media technology
Over the years I have come, from an ontological perspective, to two main findings about media and technology. For a long time I tended to treat technologies as objects, things, whereas now it seems much more productive to see them as processes, techniques. Indeed, the biologist does not speak of the deer as biology. In this sense, technology is the science of techniques, including cultural techniques, which span from reading, writing and counting to painting, programming and publishing.
Media in the humanities are a compound of two long-unrelated histories. One treats media as means of communication: signals sent from point A to point B, lacking context and meaning. The other speaks of media as artistic means of expression, such as painting, sculpture, poetry, theatre, music or film. The term “media art” is emblematic of this amalgam, and historical awareness of these two threads sheds new light on it.
Media technology in art and the humanities continues to be the primary object of Monoskop’s research.
I have attempted to comment on the political, aesthetic and technical aspects of publishing. Let me finish by saying that Monoskop is an initiative open to people and to the future, and you are more than welcome to take part in it.

Dušan Barok
Written May 1-7, 2014, in Bergen and Prague. Translated by the author on May 10-13,
2014. This version generated June 10, 2014.


Dockray
Interface, Access, Loss
2013


Interface, Access, Loss

I want to begin this talk at the end -- by which I mean the end of property -- at least according to the cyber-utopian account of things, where digital file sharing and online communication liberate culture from corporations and their drive for profit. This is just one of the promised forms of emancipation -- property, in a sense, was undone. People, on a massive scale, used their computers and their internet connections to share digitized versions of their objects with each other, quickly producing a different, common form of ownership. The crisis that this provoked is well-known -- it could be described in one word: Napster. What is less recognized -- because it is still very much in process -- is the subsequent undoing of property, of both the private and common kind. What follows is one story of "the cloud" -- the post-dot-com-bubble techno-super-entity -- which sucks up property, labor, and free time.

Object, Interface

It's debated whether the growing automation of production leads to global structural
unemployment or not -- Karl Marx wrote that "the self-expansion of capital by means of machinery
is thenceforward directly proportional to the number of the workpeople, whose means of
livelihood have been destroyed by that machinery" - but the promise is, of course, that when
robots do the work, we humans are free to be creative. Karl Kautsky predicted that increasing
automation would actually lead, not to a mass surplus population or widespread creativity, but
something much more mundane: the growth of clerks and bookkeepers, and the expansion of
unproductive sectors like "the banking system, the credit system, insurance empires and
advertising."

Marx was analyzing the number of people employed by some of the new industries in the middle of the 19th century: "gas-works, telegraphy, photography, steam navigation, and railways." The facts were that these industries were incredibly important, expansive and growing, highly mechanized.. and employed a very small number of people. It is difficult not to read his study of these technologies of connection and communication against the background of our present moment, in which the rise of the Internet has been accompanied by the deindustrialization of cities, increased migrant and mobile labor, and jobs made obsolete by computation.

There are obvious examples of the impact of computation on the workplace: at factories and
distribution centers, robots engineered with computer-vision can replace a handful of workers,
with a savings of millions of dollars per robot over the life of the system. And there are less
apparent examples as well, like algorithms determining when and where to hire people and for
how long, according to fluctuating conditions.
Both examples have parallels within computer programming, namely reuse and garbage collection. Code reuse refers to the practice of writing software in such a way that the code can be used again later, in another program, to perform the same task. It is considered wasteful to give the same time, attention, and energy to a function twice, because the development environment is not an assembly line -- a programmer shouldn't repeat. Such repetition then gives way to copy-and-pasting (or merely calling). The analogy here is to the robot, to the replacement of human labor with technology.

Now, when a program is in the midst of being executed, the computer's memory fills with data -but some of that is obsolete, no longer necessary for that program to run. If left alone, the memory
would become clogged, the program would crash, the computer might crash. It is the role of the
garbage collector to free up memory, deleting what is no longer in use. And here, I'm making the
analogy with flexible labor, workers being made redundant, and so on.
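Both practices can be sketched in a few lines of code (a hypothetical TypeScript example, not from the original text; the function and names are invented for illustration): a function written once is reused wherever the task recurs, and objects that fall out of reach are reclaimed automatically.

```typescript
// Reuse: written once, then called from any program that needs the task done,
// rather than giving the same time and attention to it again.
function slugify(title: string): string {
  return title.toLowerCase().replace(/\W+/g, "-");
}

function run(): void {
  // These objects exist only while the program needs them...
  const drafts = ["Interface, Access, Loss", "Techniques of Publishing"]
    .map((t) => ({ title: t, slug: slugify(t) }));
  console.log(drafts.map((d) => d.slug));
} // ...after run() returns, nothing references `drafts` any longer

run();
// The runtime's garbage collector is now free to reclaim that memory,
// deleting what is no longer in use so the program does not clog or crash.
```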

In Object-Oriented Programming, a programmer designs the software that she is writing around “objects,” where each object is conceptually divided into “public” and “private” parts. The public parts are accessible to other objects, but the private ones are hidden from the world outside the boundaries of that object. It's a “black box” -- a thing that can be known through its inputs and outputs -- even in total ignorance of its internal mechanisms. What difference does it make if the code is written in one way versus another.. if it behaves the same? As William James wrote, “If no practical difference whatever can be traced, then the alternatives mean practically the same thing, and all dispute is idle.”
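A minimal sketch of such a black box (hypothetical TypeScript, invented for illustration): the public part is the only point of contact for other objects, while the internal mechanism stays concealed.

```typescript
class Translator {
  // Private part: hidden from the world outside the object's boundaries.
  private dictionary = new Map<string, string>([
    ["kniha", "book"],
    ["knižnica", "library"],
  ]);

  // Public part: accessible to other objects.
  public translate(word: string): string {
    return this.dictionary.get(word) ?? word;
  }
}

const t = new Translator();
console.log(t.translate("kniha")); // "book" -- how, exactly, stays hidden
// t.dictionary                    // compile error: 'dictionary' is private
```

Any other implementation with the same inputs and outputs could be swapped in; as James would have it, the dispute over the difference would be idle.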

By merely having a public interface, an object is already a social entity. It makes no sense to even provide access to the outside if there are no potential objects with which to interact! So to understand the object-oriented program, we must scale up -- not by increasing the size or complexity of the object, but instead by increasing the number and types of objects such that their relations become more dense. The result is an intricate machine with an on and an off state, rather than a beginning and an end. Its parts are interchangeable -- provided that they reliably produce the same behavior, the same inputs and outputs. Furthermore, this machine can be modified: objects can be added and removed, changing but not destroying the machine; and it might be, using Gerald Raunig's appropriate term, "concatenated" with other machines.

Inevitably, this paradigm for describing the relationship between software objects spread outwards,
subsuming more of the universe outside of the immediate code. External programs, powerful
computers, banking institutions, people, and satellites have all been “encapsulated” and
“abstracted” into objects with inputs and outputs. Is this a conceptual reduction of the richness
and complexity of reality? Yes, but only partially. It is also a real description of how people,
institutions, software, and things are being brought into relationship with one another according to
the demands of networked computation.. and the expanding field of objects comprises exactly those entities integrated into such a network.

Consider a simple example of decentralized file-sharing: its diagram might represent an object-oriented piece of software, but here each object is a person-computer, shown in potential relation to every other person-computer. Files might be sent or received at any point in this machine, which seems particularly oriented towards circulation and movement. Much remains private, but a collection of files from every person is made public and opened up to the network. Taken as a whole, the entire collection of all files -- which on the one hand exceeds the storage capacity of any one person's technical hardware -- is on the other hand entirely available to every person-computer. If the files were books.. then this collective collection would be a public library.
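A toy model of that diagram (hypothetical TypeScript, invented for illustration): each person-computer keeps most files private and publishes a few, and the collective collection is the union of all public folders, exceeding any single node's share.

```typescript
type PersonComputer = {
  name: string;
  privateFiles: string[]; // remains out of the network's reach
  publicFiles: string[];  // opened up to every other peer
};

const peers: PersonComputer[] = [
  { name: "a", privateFiles: ["diary.txt"], publicFiles: ["simondon.pdf"] },
  { name: "b", privateFiles: ["notes.txt"], publicFiles: ["bogdanov.pdf"] },
];

// The collective collection: held in full by no single peer,
// yet available to every one of them.
const library = new Set(peers.flatMap((p) => p.publicFiles));
console.log([...library]); // ["simondon.pdf", "bogdanov.pdf"]
```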

In order for a system like this to work, for the inputs and the outputs to actually engage with one
another to produce action or transmit data, there needs to be something in place already to enable
meaningful couplings. Before there is any interaction or any relationship, there must be some
common ground in place that allows heterogenous objects to ‘talk to each other’ (to use a phrase
from the business casual language of the Californian Ideology). The term used for such a common
ground - especially on the Internet - is platform, a word for that which enables and anticipates future action without directly producing it. A platform provides tools and resources to the objects
that run “on top” of the platform so that those objects don't need to have their own tools and
resources. In this sense, the platform offers itself as a way for objects to externalize (and reuse)
labor. Communication between objects is one of the most significant actions that a platform can
provide, but it requires that the objects conform some amount of their inputs and outputs to the
specifications dictated by the platform.

But haven't I only introduced another coupling -- this time not between two objects, but between the object and the platform? What I'm talking about with "couplings" is the meeting point between things -- in other words, an “interface.” In the terms of OOP, the interface is an abstraction that
defines what kinds of interaction are possible with an object. It maps out the public face of the
object in a way that is legible and accessible to other objects. Similarly, computer interfaces like
screens and keyboards are designed to meet with human interfaces like fingers and eyes, allowing
for a specific form of interaction between person and machine. Any coupling between objects
passes through some interface and every interface obscures as much as it reveals - it establishes
the boundary between what is public and what is private, what is visible and what is not. The
dominant aesthetic values of user interface design actually privilege such concealment as “good
design,” appealing to principles of simplicity, cleanliness, and clarity.
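In code, the idea might be sketched like this (hypothetical TypeScript, invented for illustration): the interface declares which interactions are possible, and any object conforming to it can stand behind it, its differences concealed from the caller.

```typescript
// The interface maps out the public face: what may be asked of an object.
interface Bookstore {
  price(title: string): number;
  buy(title: string): void;
}

// Two very different mechanisms behind the same face.
class CornerShop implements Bookstore {
  price(title: string): number { return 12; }
  buy(title: string): void { console.log(`${title}, wrapped over the counter`); }
}

class CloudStore implements Bookstore {
  price(title: string): number { return 9.99; }
  buy(title: string): void { console.log(`${title}, licensed to your account`); }
}

// The caller couples to the interface alone; swapping what lies
// behind it makes no difference in front.
function purchase(store: Bookstore, title: string): void {
  if (store.price(title) < 20) store.buy(title);
}

purchase(new CornerShop(), "Gamer Theory");
purchase(new CloudStore(), "Gamer Theory");
```

This anticipates the point that follows: as long as the interface holds steady, everything behind it can be restructured without the caller noticing.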
Cloud, Access

One practical outcome of this has been that there can be tectonic shifts behind the interface -- where entire systems are restructured or revolutionized -- without any interruption, as long as the interface itself remains essentially unchanged. In Pragmatism's terms, a successful interface keeps
any difference (in back) from making a difference (in front). Using books again as an example: for
consumers to become accustomed to the initial discomfort of purchasing a product online instead
of from a shop, the interface needs to make it so that “buying a book” is something that could be
interchangeably accomplished either by a traditional bookstore or the online "marketplace"
equivalent. But behind the interface is Amazon, which through low prices and wide selection is
the most visible platform for buying books and uses that position to push retailers and publishers
both to, at best, the bare minimum of profitability.

In addition to selling things to people and collecting data about its users (what they look at and
what they buy) to personalize product recommendations, Amazon has also made an effort to be a
platform for the technical and logistical parts of other retailers. Ultimately collecting data from
them as well, Amazon realizes a competitive advantage from having a comprehensive, up-to-the-minute perspective on market trends and inventories. This volume of data is so vast and valuable
that warehouses packed with computers are constructed to store it, protect it, and make it readily
available to algorithms. Data centers, such as these, organize how commodities circulate (they run
business applications, store data about retail, manage fulfillment) but also - increasingly - they
hold the commodity itself - for example, the book. Digital book sales started the millennium very
slowly but by 2010 had overtaken hardcover sales.

Amazon’s store of digital books (or Apple’s or Google’s, for that matter) is a distorted reflection of
the collection circulating within the file-sharing network, displaced from personal computers to
corporate data centers. Here are two regimes of digital property: the swarm and the cloud. In swarms (a reference to swarm downloading, where a single file can be downloaded in parallel from multiple sources), property is held in common between peers; in the cloud, however, property is positioned out of reach, accessible only through an interface that has absorbed legal and business requirements.

It's just half of the story, however, to associate the cloud with mammoth data centers; the other
half is to be found in our hands and laps. Thin computing, including tablets and e-readers, iPads
and Kindles, and mobile phones, has co-evolved with data centers, offering powerful, lightweight
computing precisely because so much processing and storage has been externalized.

In this technical configuration of the cloud, the thin computer and the fat data center meet through
an interface, inevitably clean and simple, that manages access to the remote resources. Typically,
a person needs to agree to certain “terms of service,” have a unique, measurable account, and
provide payment information; in return, access is granted. This access is not ownership in the
conventional sense of a book, or even the digital sense of a file, but rather a license that gives the
person a “non-exclusive right to keep a permanent copy… solely for your personal and non-commercial use,” contradicting the First Sale Doctrine, which gives the “owner” the right to sell,
lease, or rent their copy to anyone they choose at any price they choose. The doctrine,

established within America's legal system in 1908, separated the rights of reproduction from
distribution, as a way to "exhaust" the copyright holder's control over the commodities that people
purchased.. legitimizing institutions like used book stores and public libraries. Computer software
famously attempted to bypass the First Sale Doctrine with its "shrink wrap" licenses that restricted
the rights of the buyer once she broke through the plastic packaging to open the product. This
practice has only evolved and become ubiquitous over the last three decades as software began
being distributed digitally through networks rather than as physical objects in stores. Such
contradictions are symptoms of the shift in property regimes, or what Jeremy Rifkin called “the age
of access.” He writes that “property continues to exist but is far less likely to be exchanged in
markets. Instead, suppliers hold on to property in the new economy and lease, rent, or charge an
admission fee, subscription, or membership dues for its short-term use.”

Thinking again of books, Rifkin’s description gives the image of a paid library emerging as the
synthesis of the public library and the marketplace for commodity exchange. Considering how, on
the one side, traditional public libraries are having their collections deaccessioned, hours of
operation cut, and are in some cases being closed down entirely, and on the other side, the
traditional publishing industry finds its stores, books, and profits dematerialized, the image is
perhaps appropriate. Server racks, in photographs inside data centers, bear an eerie resemblance to library stacks -- while e-readers are consciously designed to look and feel something like a
book. Yet, when one peers down into the screen of the device, one sees both the book - and the
library.

Like a Facebook account, which must uniquely correspond to a real person, the e-reader is an
individualizing device. It is the object that establishes trusted access with books stored in the cloud
and ensures that each and every person purchases their own rights to read each book. The only
transfer that is allowed is of the device itself, which is the thing that a person actually does own.
But even then, such an act must be reported back to the cloud: the hardware needs to be deregistered and then re-registered with credit card and authentication details about the new owner.

This is no library - or it's only a library in the most impoverished sense of the word. It is a new
enclosure, and it is a familiar story: things in the world (from letters, to photographs, to albums, to
books) are digitized (as emails, JPEGs, MP3s, and PDFs) and subsequently migrate to a remote

location or service (Gmail, Facebook, iTunes, Kindle Store). The middle phase is the biggest
disruption, when the interface does the poorest job concealing the material transformations taking
place, when the work involved in creating those transformations is most apparent, often because
the person themselves is deeply involved in the process (of ripping vinyl, for instance). In the third
phase, the user interface becomes easier, more “frictionless,” and what appears to be just another
application or folder on one’s computer is an engorged, property-and-energy-hungry warehouse a
thousand miles away.

Capture, Loss

Intellectual property's enclosure is easy enough to imagine in warehouses of remote, secure hard
drives. But the cloud internalizes processing as well as storage, capturing the new forms of cooperation and collaboration characterizing the new economy and its immaterial labor. Social relations are transmuted into database relations on the "social web," which absorbs self-organization as well. Because of this, the cloud impacts as strongly on the production of publications as on their consumption, in the traditional sense.

Storage, applications, and services offered in the cloud are marketed for consumption by authors
and publishers alike. Document editing, project management, and accounting are peeled slowly
away from the office staff and personal computers into the data centers; interfaces are established
into various publication channels from print on demand to digital book platforms. In the fully
realized vision of cloud publishing, the entire technical and logistical apparatus is externalized,
leaving only the human labor.. and their thin devices remaining. Little distinguishes the author-object from the editor-object from the reader-object. All of them.. maintain their position in the
network by paying for lightweight computers and their updates, cloud services, and broadband
internet connections.
On the production side of the book, the promise of the cloud is a recovery of the profits “lost” to
file-sharing, as all that exchange is disciplined, standardized and measured. Consumers are finally
promised the access to the history of human knowledge that they had already improvised by
themselves, but now without the omnipresent threat of legal prosecution. One has the sneaking
suspicion though.. that such a compromise is as hollow.. as the promises to a desperate city of the jobs that will be created in a newly constructed data center -- and that pitting “food on the table”
against “access to knowledge” is both a distraction from and a legitimation of the forms of power
emerging in the cloud. It's a distraction because it's by policing access to knowledge that the
middle-man platform can extract value from publication, both on the writing and reading sides of
the book; and it's a legitimation because the platform poses itself as the only entity that can resolve
the contradiction between the two sides.

When the platform recedes behind the interface, these two sides are the most visible antagonism -- in a tug-of-war with each other -- yet neither the “producers” nor the “consumers” of
publications are becoming more wealthy, or working less to survive. If we turn the picture
sideways, however, a new contradiction emerges, between the indebted, living labor - of authors,
editors, translators, and readers - on one side, and on the other.. data centers, semiconductors,
mobile technology, expropriated software, power companies, and intellectual property.
The talk in the data center industry of the “industrialization” of the cloud refers to the scientific
approach to improving design, efficiency, and performance. But the term also recalls the basic
narrative of the Industrial Revolution: the movement from home-based manufacturing by hand to
large-scale production in factories. As desktop computers pass into obsolescence, we shift from a
networked, but small-scale, relationship to computation (think of “home publishing”) to a
reorganized form of production that puts the accumulated energy of millions to work through
these cloud companies and their modernized data centers.

What kind of buildings are these blank superstructures? Factories for the 21st century? An engineer
named Ken Patchett described the Facebook data center that way in a television interview, “This is
a factory. It’s just a different kind of factory than you might be used to.” Those factories that we’re
“used to” continue to exist (at Foxconn, for instance), producing the infrastructure, under recognizably exploitative conditions, for a “different kind of factory” -- a factory that extends far
beyond the walls of the data center.

But the idea of the factory is only part of the picture - this building is also a mine.. and the
dispersed workforce devote most of their waking hours to mining-in-reverse, packing it full of data,
under the expectation that someone - soon - will figure out how to pull out something valuable.

Both metaphors rely on the image of a mass of workers (dispersed as it may be) and leave aside a darker
and more difficult possibility: the data center is like the hydroelectric plant, damming up property,
sociality, creativity and knowledge, while engineers and financiers look for the algorithms to
release the accumulated cultural and social resources on demand, as profit.

This returns us to the interface, site of the struggles over the management and control of access to
property and infrastructure. Previously, these struggles were situated within the computer-object
and the implied freedom provided by its computation, storage, and possibilities for connection
with others. Now, however, the eviscerated device is more interface than object, and it is exactly
here at the interface that the new technological enclosures have taken form (for example, see
Apple's iOS products, Google's search box, and Amazon's "marketplace"). Control over the
interface is guaranteed by control over the entire techno-business stack: the distributed hardware
devices, centralized data centers, and the software that mediates the space between. Every major
technology corporation must now operate on all levels to protect against any loss.

There is a centripetal force to the cloud and this essay has been written in its irresistible pull. In
spite of the sheer mass of capital that is organized to produce this gravity and the seeming
insurmountability of it all, there is no chance that the system will absolutely manage and control
the noise within it. Riots break out on the factory floor; algorithmic trading wreaks havoc on the
stock market in an instant; data centers go offline; 100 million Facebook accounts are discovered
to be fake; the list will go on. These cracks in the interface don't point to any possible future, or
any desirable one, but they do draw attention to openings that might circumvent the logic of
access.

"What happens from there is another question." This is where I left things off in the text when I
finished it a year ago. It's a disappointing ending: we just have to invent ways of occupying the
destruction, violence and collapse that emerge out of economic inequality, global warming,
dismantled social welfare, and so on. And there's not much that's happened since then to make us
very optimistic - maybe here I only have to mention the NSA. But as I began with an ending, I
really should end at a beginning.
I think we were obliged to adopt a negative, critical position in response to the cyber-utopianism of

the last almost 20 years, whether in its naive or cynical forms. We had to identify and theorize the
darker side of things. But it can become habitual, and when the dark side materializes, as it has
over the past few years - so that everyone knows the truth - then the obligation flips around,
doesn't it? To break out of habitual criticism as the tacit, defeated acceptance of what is. But, what
could be? Where do we find new political imaginaries? Not to ask what is the bright side, or what
can we do to cope, but what are the genuinely emancipatory possibilities that are somehow still
latent, buried under the present -- or emerging within those ruptures in it? -- I can't make it all
the way to a happy ending, to a happy beginning, but at least it's a beginning and not the end.

Mattern
Making Knowledge Available
2018


# Making Knowledge Available

## The media of generous scholarship

[Shannon Mattern](http://www.publicseminar.org/author/smattern/ "Posts by Shannon Mattern") -- [March 22, 2018](http://www.publicseminar.org/2018/03/making-knowledge-available/ "Permalink to Making Knowledge Available")


*Visible Knowledge © Jasinthan Yoganathan | Flickr*

A few weeks ago, shortly after reading that Elsevier, the world’s largest
academic publisher, had made over €1 billion in profit in 2017, I received
notice of a new journal issue on decolonization and media.* “Decolonization”
denotes the dismantling of imperialism, the overturning of systems of
domination, and the founding of new political orders. Recalling Achille
Mbembe’s exhortation that we seek to decolonize our knowledge production
practices and institutions, I looked forward to exploring this new collection
of liberated learning online – amidst that borderless ethereal terrain where
information just wants to be free. (…Not really.)

Instead, I encountered a gate whose keeper sought to extract a hefty toll: $42
to rent a single article for the day, or $153 to borrow it for the month. The
keeper of that particular gate, mega-publisher Taylor & Francis, like the
keepers of many other epistemic gates, has found toll-collecting to be quite a
profitable business. Some of the largest academic publishers have, in recent
years, achieved profit margins of nearly 40%, higher than those of Apple and
Google. Granted, I had access to an academic library and an InterLibrary Loan
network that would help me to circumvent the barriers – yet I was also aware
of just how much those libraries were paying for that access on my behalf; and
of all the un-affiliated readers, equally interested and invested in
decolonization, who had no academic librarians to serve as their liaisons.

I’ve found myself standing before similar gates in similar provinces of
paradox: the scholarly book on “open data” that sells for well over $100; the
conference on democratizing the “smart city,” where tickets sell for ten times
as much. Librarian Ruth Tillman was [struck with “acute irony
poisoning”](https://twitter.com/ruthbrarian/status/932701152839454720) when
she encountered a costly article on rent-seeking and value-grabbing in a
journal of capitalism and socialism, which was itself rentable by the month
for a little over $900.

We’re certainly not the first to acknowledge the paradox. For decades, many
have been advocating for open-access publishing, authors have been campaigning
for less restrictive publishing agreements, and librarians have been
negotiating with publishers over exorbitant subscription fees. That fight
continues: in mid-February, over 100 libraries in the UK and Ireland
[submitted a letter](https://www.sconul.ac.uk/page/open-letter-to-the-management-of-the-publisher-taylor-francis) to Taylor & Francis protesting
their plan to lock up content more than 20 years old and sell it as a separate
package.

My coterminous discoveries of Elsevier's profit and that decolonization-behind-a-paywall once again highlighted the ideological ironies of academic publishing, prompting me to [tweet something](https://twitter.com/shannonmattern/status/969418644240420865) half-baked about academics perhaps giving a bit more thought to whether the
politics of their publishing  _venues_  – their media of dissemination –
matched the politics they’re arguing for in their research. Maybe, I proposed,
we aren’t serving either ourselves or our readers very well by advocating for
social justice or “the commons” – or sharing progressive research on labor
politics and care work and the elitism of academic conventions – in journals
that extract huge profits from free labor and exploitative contracts and fees.

Despite my attempt to drown my “call to action” in a swamp of rhetorical
conditionals – “maybe” I was “kind-of” hedging “just a bit”? – several folks
quickly, and constructively, pointed out some missing nuances in my tweet.
[Librarian and LIS scholar Emily Drabinski
noted](https://twitter.com/edrabinski/status/969629307147563008) the dangers
of suggesting that individual “bad actors” are to blame for the hypocrisies
and injustices of a broken system – a system that includes authors, yes, but
also publishers of various ideological orientations, libraries, university
administrations, faculty review committees, hiring committees, accreditors,
and so forth.

And those authors are not a uniform group. Several junior scholars replied to
say that they think  _a lot_  about the power dynamics of academic publishing
(many were “hazed,” at an early age, into the [Impact
Factor](https://en.wikipedia.org/wiki/Impact_factor) Olympics, encouraged to
obsessively count citations and measure “prestige”). They expressed a desire
to experiment with new modes and media of dissemination, but lamented that
they had to bracket their ethical concerns and aesthetic aspirations. Because
tenure. Open-access publications, and more-creative-but-less-prestigious
venues, “don’t count.” Senior scholars chimed in, too, to acknowledge that
scholars often publish in different venues at different times for different
purposes to reach different audiences (I’d add, as well, that some
conversations need to happen in enclosed, if not paywalled, environments
because “openness” can cultivate dangerous vulnerabilities). Some also
concluded that, if we want to make “open access” and public scholarship – like
that featured in  _Public Seminar_  – “count,” we’re in for a long battle: one
that’s best waged within big professional scholarly associations. Even then,
there’s so much entrenched convention – so many naturalized metrics and
administrative structures and cultural habits – that we’re kind-of stuck with
these rentier publishers (to elevate the ingrained irony: in August 2017,
Elsevier acquired bepress, an open-access digital repository used by many
academic institutions). They need our content and labor, which we willingly give away for free, because we need their validation even more.

All this is true. Still, I’d prefer to think that we  _can_ actually resist
rentierism, reform our intellectual infrastructures, and maybe even make some
progress in “decolonizing” the institution over the next years and decades. As
a mid-career scholar, I’d like to believe that my peers and I, in
collaboration with our junior colleagues and colleagues-to-be, can espouse new
values – which include attention to the political, ethical, and even aesthetic
dimensions of the means and  _media_ through which we do our scholarship – in
our search committees, faculty reviews, and juries. Change  _can_  happen at
the local level; one progressive committee can set an example for another, and
one college can do the same. Change can take root at the mega-institutional
scale, too. Several professional organizations, like the Modern Language
Association and many scientific associations, have developed policies and
practices to validate open-access publishing. We can look, for example, to the
[MLA Commons](https://mla.hcommons.org/) and the [Manifold publishing
platform](https://manifold.umn.edu/). We can also look to Germany, where a
nationwide consortium of libraries, universities, and research institutes has
been battling Elsevier since 2016 over their subscription and access policies.
Librarians have long been advocates for ethical publishing, and [as Drabinski
explains](https://crln.acrl.org/index.php/crlnews/article/view/9568/10924),
they’re equipped to consult with scholars and scholarly organizations about
the publication media and platforms that best reinforce their core values.
Those values are the chief concern of the [HuMetricsHSS
initiative](http://humetricshss.org/about-2/), which is imagining a “more
humane,” values-based framework for evaluating scholarly work.

We also need to acknowledge the work of those who’ve been advocating for
similar ideals – and working toward a more ethically reflective publishing
culture – for years. Let’s consider some examples from the humanities and
social sciences – like the path-breaking [Institute for the Future of the
Book](http://www.futureofthebook.org/), which provided the platform where my
colleague McKenzie Wark publicly edited his [ _Gamer
Theory_](http://futureofthebook.org/gamertheory2.0/) back in 2006. Wark’s book
began online and became a print book, published by Harvard. Several
institutions – MIT; [Minnesota](https://www.upress.umn.edu/book-division/series/forerunners-ideas-first); [Columbia’s Graduate School of Architecture, Planning, and Preservation](https://www.arch.columbia.edu/books) (whose publishing unit is led by a New School alum, James Graham, who also happens to be a former thesis advisee); Harvard’s [Graduate School of Design](http://www.gsd.harvard.edu/publications/) and [metaLab](http://www.hup.harvard.edu/collection.php?cpk=2006); and The New School’s own [Vera List Center](http://www.veralistcenter.org/engage/publications/1993/entry-pointsthe-vera-list-center-field-guide-on-art-and-social-justice-no-1/) – have been experimenting with the printed book. And individual scholars and practitioners, like Nick Sousanis, who [published his dissertation](http://www.hup.harvard.edu/catalog.php?isbn=9780674744431) as a graphic novel, regard the bibliographic form as integral to their arguments.

Kathleen Fitzpatrick has also been a vibrant force for change, through her
work with the [MediaCommons](http://mediacommons.futureofthebook.org/) digital
scholarly network, her two [open-review](http://www.plannedobsolescence.net/peer-to-peer-review-and-its-aporias/) books, and [her advocacy](http://www.plannedobsolescence.net/evolving-standards-and-practices-in-tenure-and-promotion-reviews/) for more flexible, more thoughtful faculty review standards. Her new manuscript, _Generous Thinking_, which lives up to its name, proposes [public intellectualism](https://generousthinking.hcommons.org/4-working-in-public/public-intellectuals/) as one such generous practice and advocates for [its positive valuation](https://generousthinking.hcommons.org/5-the-university/) within the
academy. “What would be required,” she asks, “for the university to begin
letting go of the notion of prestige and of the competition that creates it in
order to begin aligning its personnel processes with its deepest values?” Such
a realignment, I want to emphasize, need not mean a reduction in rigor, as
some have worried; we can still have standards, while insisting that they
correspond to our values. USC’s Tara McPherson has modeled generous and
careful scholarship through her own work and her collaborations in developing
the [Vectors](http://vectors.usc.edu/issues/index.php?issue=7) and
[Scalar](https://scalar.me/anvc/scalar/) publishing platforms, which launched
in 2005 and 2013, respectively.  _Public Seminar_  is [part of that long
tradition](http://www.publicseminar.org/2017/09/the-life-of-the-mind-online/),
too.

Individual scholars – particularly those who enjoy some measure of security –
can model a different pathway and advocate for a more sane, sustainable, and
inclusive publication and review system. Rather than blaming the “bad actors”
for making bad choices and perpetuating a flawed system, let’s instead
incentivize the good ones to practice generosity.

In that spirit, I’d like to close by offering a passage I included in my own
promotion dossier, where I justified my choice to prioritize public
scholarship over traditional peer-reviewed venues. I aimed here to make my
values explicit. While I won’t know the outcome of my review for a few months,
and thus I can’t say whether or not this passage successfully served its
rhetorical purpose, I do hope I’ve convincingly argued here that, in
researching media and technology, one should also think critically about the
media one chooses to make that research public. I share this in the hope that
it’ll be useful to others preparing for their own job searches and faculty
reviews, or negotiating their own politics of practice. The passage is below.

* * *

…[A] concern with public knowledge infrastructures has… informed my choice of
venues for publication. Particularly since receiving tenure I’ve become much
more attuned to publication platforms themselves as knowledge infrastructures.
I’ve actively sought out venues whose operational values match the values I
espouse in my research – openness and accessibility (and, equally important,
good design!) – as well as those that The New School embraces through its
commitment to public scholarship and civic engagement. Thus, I’ve steered away
from those peer-reviewed publications that are secured behind paywalls and
rely on uncompensated editorial labor while their parent companies uphold
exploitative copyright policies and charge exorbitant subscription fees. I’ve
focused instead on open-access venues. Most of my articles are freely
available online, and even my 2015 book, _Deep Mapping the Media City_,
published by the University of Minnesota Press, has been made available
through the Mellon Foundation-funded Manifold open-access publishing platform.
In those cases in which I have been asked to contribute work to a restricted
peer-reviewed journal or costly edited volume, I’ve often negotiated with the
publisher to allow me to “pre-print” my work as an article in an open-access
online venue, or to share an unedited preview copy.

I’ve been invited to address the ethics and epistemologies of scholarly
publishing and pedagogical platforms in a variety of venues, A, B, C, D, and
E. I also often chat with graduate students and junior scholars about their
own “publication politics” and appropriate venues for their work, and I review
their prospectuses and manuscripts.

The most personally rewarding and professionally valuable publishing
experience of my post-tenure career has been my collaboration with _Places
Journal_, a highly regarded non-profit, university-supported, open-access
venue for public scholarship on landscape, architecture, and urbanism. After
having written thirteen (fifteen by Fall 2017) long-form pieces for  _Places_
since 2012, I’ve effectively assumed their “urban data and mediated spaces”
beat. I work with paid, professional editors who care not only about subject
matter – they’re just as much domain experts as any academic peer reviewer
I’ve encountered – but also about clarity and style and visual presentation.
My research and writing process for  _Places_ is no less time- and labor-
intensive, and the editorial process is no less rigorous, than would be
required for a traditional academic publication, but  _Places_  allows my work
to reach a global, interdisciplinary audience in a timely manner, via a
smartly designed platform that allows for rich illustration. This public
scholarship has a different “impact” than pay-walled publications in prestige
journals. Yet the response to my work on social media, the number of citations
it’s received (in both scholarly and popular literature), and the number of
invitations it’s generated suggest the significant, if incalculable, value of
such alternative infrastructures for academic publishing. By making my work
open and accessible, I’ve still managed to meet many of the prestige- and
scarcity-driven markers of academic excellence (for more on my work’s impact,
see Appendix A).

_* I’ve altered some details so as to avoid sanctioning particular editors or
authors._

_Shannon Mattern is Associate Professor of Media Studies at The New School and
author of numerous books with University of Minnesota Press. Find her on
Twitter [@shannonmattern](http://www.twitter.com/shannonmattern)._


 
