Mattern
Making Knowledge Available
2018


# Making Knowledge Available

## The media of generous scholarship

[Shannon Mattern](http://www.publicseminar.org/author/smattern/) -- [March 22, 2018](http://www.publicseminar.org/2018/03/making-knowledge-available/ "Permalink to Making Knowledge Available")


![Visible Knowledge](http://www.publicseminar.org/wp-content/uploads/2018/03/6749000895_ea0145ed2d_o-750x375.jpg)

Visible Knowledge © Jasinthan Yoganathan | Flickr

A few weeks ago, shortly after reading that Elsevier, the world’s largest
academic publisher, had made over €1 billion in profit in 2017, I received
notice of a new journal issue on decolonization and media.* “Decolonization”
denotes the dismantling of imperialism, the overturning of systems of
domination, and the founding of new political orders. Recalling Achille
Mbembe’s exhortation that we seek to decolonize our knowledge production
practices and institutions, I looked forward to exploring this new collection
of liberated learning online – amidst that borderless ethereal terrain where
information just wants to be free. (…Not really.)

Instead, I encountered a gate whose keeper sought to extract a hefty toll: $42
to rent a single article for the day, or $153 to borrow it for the month. The
keeper of that particular gate, mega-publisher Taylor & Francis, like the
keepers of many other epistemic gates, has found toll-collecting to be quite a
profitable business. Some of the largest academic publishers have, in recent
years, achieved profit margins of nearly 40%, higher than those of Apple and
Google. Granted, I had access to an academic library and an InterLibrary Loan
network that would help me to circumvent the barriers – yet I was also aware
of just how much those libraries were paying for that access on my behalf; and
of all the un-affiliated readers, equally interested and invested in
decolonization, who had no academic librarians to serve as their liaisons.

I’ve found myself standing before similar gates in similar provinces of
paradox: the scholarly book on “open data” that sells for well over $100; the
conference on democratizing the “smart city,” where tickets sell for ten times
as much. Librarian Ruth Tillman was [struck with “acute irony
poisoning”](https://twitter.com/ruthbrarian/status/932701152839454720) when
she encountered a costly article on rent-seeking and value-grabbing in a
journal of capitalism and socialism, which was itself rentable by the month
for a little over $900.

We’re certainly not the first to acknowledge the paradox. For decades, many
have been advocating for open-access publishing, authors have been campaigning
for less restrictive publishing agreements, and librarians have been
negotiating with publishers over exorbitant subscription fees. That fight
continues: in mid-February, over 100 libraries in the UK and Ireland [submitted a letter](https://www.sconul.ac.uk/page/open-letter-to-the-management-of-the-publisher-taylor-francis) to Taylor & Francis protesting their plan to lock up content more than 20 years old and sell it as a separate package.

My coterminous discoveries of Elsevier’s profit and that decolonization-
behind-a-paywall once again highlighted the ideological ironies of academic
publishing, prompting me to [tweet
something](https://twitter.com/shannonmattern/status/969418644240420865) half-
baked about academics perhaps giving a bit more thought to whether the
politics of their publishing  _venues_  – their media of dissemination –
matched the politics they’re arguing for in their research. Maybe, I proposed,
we aren’t serving either ourselves or our readers very well by advocating for
social justice or “the commons” – or sharing progressive research on labor
politics and care work and the elitism of academic conventions – in journals
that extract huge profits from free labor and exploitative contracts and fees.

Despite my attempt to drown my “call to action” in a swamp of rhetorical
conditionals – “maybe” I was “kind-of” hedging “just a bit”? – several folks
quickly, and constructively, pointed out some missing nuances in my tweet.
[Librarian and LIS scholar Emily Drabinski
noted](https://twitter.com/edrabinski/status/969629307147563008) the dangers
of suggesting that individual “bad actors” are to blame for the hypocrisies
and injustices of a broken system – a system that includes authors, yes, but
also publishers of various ideological orientations, libraries, university
administrations, faculty review committees, hiring committees, accreditors,
and so forth.

And those authors are not a uniform group. Several junior scholars replied to
say that they think  _a lot_  about the power dynamics of academic publishing
(many were “hazed,” at an early age, into the [Impact
Factor](https://en.wikipedia.org/wiki/Impact_factor) Olympics, encouraged to
obsessively count citations and measure “prestige”). They expressed a desire
to experiment with new modes and media of dissemination, but lamented that
they had to bracket their ethical concerns and aesthetic aspirations. Because
tenure. Open-access publications, and more-creative-but-less-prestigious
venues, “don’t count.” Senior scholars chimed in, too, to acknowledge that
scholars often publish in different venues at different times for different
purposes to reach different audiences (I’d add, as well, that some
conversations need to happen in enclosed, if not paywalled, environments
because “openness” can cultivate dangerous vulnerabilities). Some also
concluded that, if we want to make “open access” and public scholarship – like
that featured in  _Public Seminar_  – “count,” we’re in for a long battle: one
that’s best waged within big professional scholarly associations. Even then,
there’s so much entrenched convention – so many naturalized metrics and
administrative structures and cultural habits – that we’re kind-of stuck with
these rentier publishers (to elevate the ingrained irony: in August 2017,
Elsevier acquired bepress, an open-access digital repository used by many
academic institutions). They need our content and labor, which we willingly give
away for free, because we need their validation even more.

All this is true. Still, I’d prefer to think that we  _can_ actually resist
rentierism, reform our intellectual infrastructures, and maybe even make some
progress in “decolonizing” the institution over the next years and decades. As
a mid-career scholar, I’d like to believe that my peers and I, in
collaboration with our junior colleagues and colleagues-to-be, can espouse new
values – which include attention to the political, ethical, and even aesthetic
dimensions of the means and  _media_ through which we do our scholarship – in
our search committees, faculty reviews, and juries. Change  _can_  happen at
the local level; one progressive committee can set an example for another, and
one college can do the same. Change can take root at the mega-institutional
scale, too. Several professional organizations, like the Modern Language
Association and many scientific associations, have developed policies and
practices to validate open-access publishing. We can look, for example, to the
[MLA Commons](https://mla.hcommons.org/) and the [Manifold publishing
platform](https://manifold.umn.edu/). We can also look to Germany, where a
nationwide consortium of libraries, universities, and research institutes has
been battling Elsevier since 2016 over their subscription and access policies.
Librarians have long been advocates for ethical publishing, and [as Drabinski
explains](https://crln.acrl.org/index.php/crlnews/article/view/9568/10924),
they’re equipped to consult with scholars and scholarly organizations about
the publication media and platforms that best reinforce their core values.
Those values are the chief concern of the [HuMetricsHSS
initiative](http://humetricshss.org/about-2/), which is imagining a “more
humane,” values-based framework for evaluating scholarly work.

We also need to acknowledge the work of those who’ve been advocating for
similar ideals – and working toward a more ethically reflective publishing
culture – for years. Let’s consider some examples from the humanities and
social sciences – like the path-breaking [Institute for the Future of the
Book](http://www.futureofthebook.org/), which provided the platform where my
colleague McKenzie Wark publicly edited his [ _Gamer
Theory_](http://futureofthebook.org/gamertheory2.0/) back in 2006. Wark’s book
began online and became a print book, published by Harvard. Several
institutions – MIT; [Minnesota](https://www.upress.umn.edu/book-division/series/forerunners-ideas-first); [Columbia's Graduate School of Architecture, Planning, and Preservation](https://www.arch.columbia.edu/books) (whose publishing unit is led by a New School alum, James Graham, who also happens to be a former thesis advisee); Harvard's [Graduate School of Design](http://www.gsd.harvard.edu/publications/) and [metaLab](http://www.hup.harvard.edu/collection.php?cpk=2006); and The New School's own [Vera List Center](http://www.veralistcenter.org/engage/publications/1993/entry-pointsthe-vera-list-center-field-guide-on-art-and-social-justice-no-1/) – have been experimenting with the printed book. And individual scholars and
practitioners, like Nick Sousanis, who [published his
dissertation](http://www.hup.harvard.edu/catalog.php?isbn=9780674744431) as a
graphic novel, regard the bibliographic form as integral to their arguments.

Kathleen Fitzpatrick has also been a vibrant force for change, through her
work with the [MediaCommons](http://mediacommons.futureofthebook.org/) digital
scholarly network, her two [open-review](http://www.plannedobsolescence.net/peer-to-peer-review-and-its-aporias/) books, and [her advocacy](http://www.plannedobsolescence.net/evolving-standards-and-practices-in-tenure-and-promotion-reviews/) for more flexible, more thoughtful faculty review standards. Her new manuscript,  _Generous Thinking_ , which lives up to its name, proposes [public intellectualism](https://generousthinking.hcommons.org/4-working-in-public/public-intellectuals/) as one such generous practice and advocates for [its positive valuation](https://generousthinking.hcommons.org/5-the-university/) within the
academy. “What would be required,” she asks, “for the university to begin
letting go of the notion of prestige and of the competition that creates it in
order to begin aligning its personnel processes with its deepest values?” Such
a realignment, I want to emphasize, need not mean a reduction in rigor, as
some have worried; we can still have standards, while insisting that they
correspond to our values. USC’s Tara McPherson has modeled generous and
careful scholarship through her own work and her collaborations in developing
the [Vectors](http://vectors.usc.edu/issues/index.php?issue=7) and
[Scalar](https://scalar.me/anvc/scalar/) publishing platforms, which launched
in 2005 and 2013, respectively.  _Public Seminar_  is [part of that long
tradition](http://www.publicseminar.org/2017/09/the-life-of-the-mind-online/),
too.

Individual scholars – particularly those who enjoy some measure of security –
can model a different pathway and advocate for a more sane, sustainable, and
inclusive publication and review system. Rather than blaming the “bad actors”
for making bad choices and perpetuating a flawed system, let’s instead
incentive the good ones to practice generosity.

In that spirit, I’d like to close by offering a passage I included in my own
promotion dossier, where I justified my choice to prioritize public
scholarship over traditional peer-reviewed venues. I aimed here to make my
values explicit. While I won’t know the outcome of my review for a few months,
and thus I can’t say whether or not this passage successfully served its
rhetorical purpose, I do hope I’ve convincingly argued here that, in
researching media and technology, one should also think critically about the
media one chooses to make that research public. I share this in the hope that
it’ll be useful to others preparing for their own job searches and faculty
reviews, or negotiating their own politics of practice. The passage is below.

* * *

…[A] concern with public knowledge infrastructures has… informed my choice of
venues for publication. Particularly since receiving tenure I’ve become much
more attuned to publication platforms themselves as knowledge infrastructures.
I’ve actively sought out venues whose operational values match the values I
espouse in my research – openness and accessibility (and, equally important,
good design!) – as well as those that The New School embraces through its
commitment to public scholarship and civic engagement. Thus, I’ve steered away
from those peer-reviewed publications that are secured behind paywalls and
rely on uncompensated editorial labor while their parent companies uphold
exploitative copyright policies and charge exorbitant subscription fees. I’ve
focused instead on open-access venues. Most of my articles are freely
available online, and even my 2015 book,  _Deep Mapping the Media City_ ,
published by the University of Minnesota Press, has been made available
through the Mellon Foundation-funded Manifold open-access publishing platform.
In those cases in which I have been asked to contribute work to a restricted
peer-reviewed journal or costly edited volume, I’ve often negotiated with the
publisher to allow me to “pre-print” my work as an article in an open-access
online venue, or to preview an un-edited copy.

I’ve been invited to address the ethics and epistemologies of scholarly
publishing and pedagogical platforms in a variety of venues, A, B, C, D, and
E. I also often chat with graduate students and junior scholars about their
own “publication politics” and appropriate venues for their work, and I review
their prospectuses and manuscripts.

The most personally rewarding and professionally valuable publishing
experience of my post-tenure career has been my collaboration with  _Places
Journal_ , a highly regarded non-profit, university-supported, open-access
venue for public scholarship on landscape, architecture, and urbanism. After
having written thirteen (fifteen by Fall 2017) long-form pieces for  _Places_
since 2012, I’ve effectively assumed their “urban data and mediated spaces”
beat. I work with paid, professional editors who care not only about subject
matter – they’re just as much domain experts as any academic peer reviewer
I’ve encountered – but also about clarity and style and visual presentation.
My research and writing process for  _Places_ is no less time- and labor-
intensive, and the editorial process is no less rigorous, than would be
required for a traditional academic publication, but  _Places_  allows my work
to reach a global, interdisciplinary audience in a timely manner, via a
smartly designed platform that allows for rich illustration. This public
scholarship has a different “impact” than pay-walled publications in prestige
journals. Yet the response to my work on social media, the number of citations
it’s received (in both scholarly and popular literature), and the number of
invitations it’s generated, suggest the significant, if incalculable, value of
such alternative infrastructures for academic publishing. By making my work
open and accessible, I’ve still managed to meet many of the prestige- and
scarcity-driven markers of academic excellence (for more on my work’s impact,
see Appendix A).

_* I’ve altered some details so as to avoid sanctioning particular editors or
authors._

_Shannon Mattern is Associate Professor of Media Studies at The New School and
author of numerous books with University of Minnesota Press. Find her on
Twitter [@shannonmattern](http://www.twitter.com/shannonmattern)._


Dockray
Interface Access Loss
2013


Interface Access Loss

I want to begin this talk at the end -- by which I mean the end of property -- at least according to
the cyber-utopian account of things, where digital file sharing and online communication liberate
culture from corporations and their drive for profit. This is just one of the promised forms of
emancipation -- property, in a sense, was undone. People, on a massive scale, used their
computers and their internet connections to share digitized versions of their objects with each
other, quickly producing a different, common form of ownership. The crisis that this provoked is
well-known -- it could be described in one word: Napster. What is less recognized -- because it is
still very much in process -- is the subsequent undoing of property, of both the private and common
kind. What follows is one story of "the cloud" -- the post-dot-com bubble techno-super-entity -- which sucks up property, labor, and free time.

Object, Interface

It's debated whether the growing automation of production leads to global structural
unemployment or not -- Karl Marx wrote that "the self-expansion of capital by means of machinery
is thenceforward directly proportional to the number of the workpeople, whose means of
livelihood have been destroyed by that machinery" - but the promise is, of course, that when
robots do the work, we humans are free to be creative. Karl Kautsky predicted that increasing
automation would actually lead, not to a mass surplus population or widespread creativity, but
something much more mundane: the growth of clerks and bookkeepers, and the expansion of
unproductive sectors like "the banking system, the credit system, insurance empires and
advertising."

Marx was analyzing the number of people employed by some of the new industries in the middle
of the 19th century: "gas-works, telegraphy, photography, steam navigation, and railways." The
facts were that these industries were incredibly important, expansive and growing, highly
mechanized.. and employed a very small number of people. It is difficult not to read his study of
these technologies of connection and communication against the background of our present
moment, in which the rise of the Internet has been accompanied by the deindustrialization of
cities, increased migrant and mobile labor, and jobs made obsolete by computation.

There are obvious examples of the impact of computation on the workplace: at factories and
distribution centers, robots engineered with computer-vision can replace a handful of workers,
with a savings of millions of dollars per robot over the life of the system. And there are less
apparent examples as well, like algorithms determining when and where to hire people and for
how long, according to fluctuating conditions.
Both examples have parallels within computer programming, namely reuse and garbage
collection. Code reuse refers to the practice of writing software in such a way that the code can be
used again later, in another program, to perform the same task. It is considered wasteful to give the
same time, attention, and energy to a function twice, because the development environment is not an
assembly line - a programmer shouldn't repeat. Such repetition then gives way to copy-and-pasting (or merely calling). The analogy here is to the robot, to the replacement of human labor
with technology.
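A minimal sketch of the distinction in Python (my illustration, not the talk's): the reusable function is written once, and later programs merely call it rather than repeating its body.

```python
def word_count(text: str) -> int:
    """Written once; any later program can call it rather than rewrite it."""
    return len(text.split())

# Two different "programs" reuse the same function instead of
# copy-and-pasting its body -- the repetition is delegated to the machine.
essay_words = word_count("property, in a sense, was undone")
title_words = word_count("Interface Access Loss")
print(essay_words, title_words)  # 6 3
```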

Now, when a program is in the midst of being executed, the computer's memory fills with data -- but some of that is obsolete, no longer necessary for that program to run. If left alone, the memory
would become clogged, the program would crash, the computer might crash. It is the role of the
garbage collector to free up memory, deleting what is no longer in use. And here, I'm making the
analogy with flexible labor, workers being made redundant, and so on.
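The mechanism can be observed directly in a garbage-collected language like Python; in this sketch (again my illustration), a weak reference serves as a probe that watches an object without keeping it alive.

```python
import gc
import weakref

class Record:
    """Stand-in for data a running program allocates in memory."""

obj = Record()
probe = weakref.ref(obj)     # watches the object without keeping it alive
assert probe() is not None   # the program still holds a reference

del obj                      # the data is obsolete: no references remain
gc.collect()                 # the garbage collector frees unused memory
assert probe() is None       # the memory has been reclaimed
```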

In Object-Oriented Programming, a programmer designs the software that she is writing around
“objects,” where each object is conceptually divided into “public” and “private” parts. The public
parts are accessible to other objects, but the private ones are hidden to the world outside the
boundaries of that object. It's a “black box” - a thing that can be known through its inputs and
outputs - even in total ignorance of its internal mechanisms. What difference does it make if the
code is written in one way versus another .. if it behaves the same? As William James wrote, “If no
practical difference whatever can be traced, then the alternatives mean practically the same thing,
and all dispute is idle.”
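In Python, which marks privacy by convention and name mangling rather than enforcement, the division looks something like this (an illustrative sketch, not from the talk):

```python
class Counter:
    """A black box: knowable entirely through its inputs and outputs."""

    def __init__(self) -> None:
        self.__total = 0          # private: hidden inside the object's boundary

    def increment(self) -> None:  # public input
        self.__total += 1

    def value(self) -> int:       # public output
        return self.__total

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2 -- the internal mechanism could be rewritten entirely
                  # and, behaving the same, would "mean practically the
                  # same thing"
```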

By merely having a public interface, an object is already a social entity. It makes no sense to even
provide access to the outside if there are no potential objects with which to interact! So to
understand the object-oriented program, we must scale up - not by increasing the size or
complexity of the object, but instead by increasing the number and types of objects such that their
relations become more dense. The result is an intricate machine with an on and an off state, rather
than a beginning and an end. Its parts are interchangeable -- provided that they reliably produce
the same behavior, the same inputs and outputs. Furthermore, this machine can be modified:
objects can be added and removed, changing but not destroying the machine; and it might be,
using Gerald Raunig’s appropriate term, “concatenated” with other machines.

Inevitably, this paradigm for describing the relationship between software objects spread outwards,
subsuming more of the universe outside of the immediate code. External programs, powerful
computers, banking institutions, people, and satellites have all been “encapsulated” and
“abstracted” into objects with inputs and outputs. Is this a conceptual reduction of the richness
and complexity of reality? Yes, but only partially. It is also a real description of how people,
institutions, software, and things are being brought into relationship with one another according to
the demands of networked computation.. and the expanding field of objects comprises exactly those
entities integrated into such a network.

Consider a simple example of decentralized file-sharing: its diagram might represent an object-oriented piece of software, but here each object is a person-computer, shown in potential relation
to every other person-computer. Files might be sent or received at any point in this machine,
which seems particularly oriented towards circulation and movement. Much remains private, but a
collection of files from every person is made public and opened up to the network. Taken as a
whole, the entire collection of all files - which on the one hand exceeds the storage capacity of
any one person’s technical hardware, is on the other hand entirely available to every person-computer. If the files were books.. then this collective collection would be a public library.

In order for a system like this to work, for the inputs and the outputs to actually engage with one
another to produce action or transmit data, there needs to be something in place already to enable
meaningful couplings. Before there is any interaction or any relationship, there must be some
common ground in place that allows heterogeneous objects to ‘talk to each other’ (to use a phrase
from the business casual language of the Californian Ideology). The term used for such a common
ground - especially on the Internet - is platform, a word for that which enables and anticipates
future action without directly producing it. A platform provides tools and resources to the objects
that run “on top” of the platform so that those objects don't need to have their own tools and
resources. In this sense, the platform offers itself as a way for objects to externalize (and reuse)
labor. Communication between objects is one of the most significant actions that a platform can
provide, but it requires that the objects conform some amount of their inputs and outputs to the
specifications dictated by the platform.
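A toy sketch of this arrangement (names and design are mine, purely illustrative): a platform that carries messages between objects, provided each conforms to its specification -- here, a one-argument handler.

```python
class Platform:
    """Common ground: enables and anticipates interaction without producing it."""

    def __init__(self) -> None:
        self._handlers = []

    def attach(self, handler) -> None:
        # Each object conforms its "input" to the platform's specification:
        # a callable taking a single message.
        self._handlers.append(handler)

    def publish(self, message: str) -> None:
        # Communication is externalized to the platform; the attached
        # objects need no tools or connections of their own.
        for handler in self._handlers:
            handler(message)

received = []
bus = Platform()
bus.attach(received.append)                      # one object's input
bus.attach(lambda m: received.append(m.upper())) # another's, differently shaped inside
bus.publish("hello")
print(received)  # ['hello', 'HELLO']
```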

But haven’t I only introduced another coupling, instead of between two objects, this time between
the object and the platform? What I'm talking about with "couplings" is the meeting point between
things - in other words, an “interface.” In the terms of OOP, the interface is an abstraction that
defines what kinds of interaction are possible with an object. It maps out the public face of the
object in a way that is legible and accessible to other objects. Similarly, computer interfaces like
screens and keyboards are designed to meet with human interfaces like fingers and eyes, allowing
for a specific form of interaction between person and machine. Any coupling between objects
passes through some interface and every interface obscures as much as it reveals - it establishes
the boundary between what is public and what is private, what is visible and what is not. The
dominant aesthetic values of user interface design actually privilege such concealment as “good
design,” appealing to principles of simplicity, cleanliness, and clarity.
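In object-oriented terms the interface can be written down explicitly; a sketch in Python (the class names are mine, purely illustrative) shows two objects with entirely different internals presenting the same public face:

```python
from abc import ABC, abstractmethod

class Readable(ABC):
    """An interface: the public face of an object, saying nothing
    about the mechanisms concealed behind it."""

    @abstractmethod
    def read(self) -> str: ...

class LocalFile(Readable):
    def __init__(self, contents: str) -> None:
        self._contents = contents          # private internals
    def read(self) -> str:
        return self._contents

class CloudDocument(Readable):
    def __init__(self, contents: str) -> None:
        self._remote = {"data": contents}  # entirely different internals
    def read(self) -> str:
        return self._remote["data"]

def display(source: Readable) -> str:
    # The caller couples to the interface alone; whatever lies behind it
    # can be restructured without any interruption in front.
    return source.read()

print(display(LocalFile("swarm")), display(CloudDocument("cloud")))  # swarm cloud
```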
Cloud, Access

One practical outcome of this has been that there can be tectonic shifts behind the interface where entire systems are restructured or revolutionized - without any interruption, as long as the
interface itself remains essentially unchanged. In Pragmatism’s terms, a successful interface keeps
any difference (in back) from making a difference (in front). Using books again as an example: for
consumers to become accustomed to the initial discomfort of purchasing a product online instead
of from a shop, the interface needs to make it so that “buying a book” is something that could be
interchangeably accomplished either by a traditional bookstore or the online "marketplace"
equivalent. But behind the interface is Amazon, which through low prices and wide selection is
the most visible platform for buying books and uses that position to push retailers and publishers
both to, at best, the bare minimum of profitability.

In addition to selling things to people and collecting data about its users (what they look at and
what they buy) to personalize product recommendations, Amazon has also made an effort to be a
platform for the technical and logistical parts of other retailers. Ultimately collecting data from
them as well, Amazon realizes a competitive advantage from having a comprehensive, up-to-the-minute perspective on market trends and inventories. This volume of data is so vast and valuable
that warehouses packed with computers are constructed to store it, protect it, and make it readily
available to algorithms. Data centers, such as these, organize how commodities circulate (they run
business applications, store data about retail, manage fulfillment) but also - increasingly - they
hold the commodity itself - for example, the book. Digital book sales started the millennium very
slowly but by 2010 had overtaken hardcover sales.

Amazon’s store of digital books (or Apple’s or Google’s, for that matter) is a distorted reflection of
the collection circulating within the file-sharing network, displaced from personal computers to
corporate data centers. Here are two regimes of digital property: the swarm and the cloud. For
swarms (a reference to swarm downloading where a single file can be downloaded in parallel
from multiple sources) property is held in common between peers -- however, property is
positioned out of reach, on the cloud, accessible only through an interface that has absorbed legal
and business requirements.

It's just half of the story, however, to associate the cloud with mammoth data centers; the other
half is to be found in our hands and laps. Thin computing -- including tablets and e-readers, iPads
and Kindles, and mobile phones -- has co-evolved with data centers, offering powerful, lightweight
computing precisely because so much processing and storage has been externalized.

In this technical configuration of the cloud, the thin computer and the fat data center meet through
an interface, inevitably clean and simple, that manages access to the remote resources. Typically,
a person needs to agree to certain “terms of service,” have a unique, measurable account, and
provide payment information; in return, access is granted. This access is not ownership in the
conventional sense of a book, or even the digital sense of a file, but rather a license that gives the
person a “non-exclusive right to keep a permanent copy… solely for your personal and non-commercial use,” contradicting the First Sale Doctrine, which gives the “owner” the right to sell,
lease, or rent their copy to anyone they choose at any price they choose. The doctrine,
established within America's legal system in 1908, separated the rights of reproduction from
distribution, as a way to "exhaust" the copyright holder's control over the commodities that people
purchased.. legitimizing institutions like used book stores and public libraries. Computer software
famously attempted to bypass the First Sale Doctrine with its "shrink wrap" licenses that restricted
the rights of the buyer once she broke through the plastic packaging to open the product. This
practice has only evolved and become ubiquitous over the last three decades as software began
being distributed digitally through networks rather than as physical objects in stores. Such
contradictions are symptoms of the shift in property regimes, or what Jeremy Rifkin called “the age
of access.” He writes that “property continues to exist but is far less likely to be exchanged in
markets. Instead, suppliers hold on to property in the new economy and lease, rent, or charge an
admission fee, subscription, or membership dues for its short-term use.”

Thinking again of books, Rifkin’s description gives the image of a paid library emerging as the
synthesis of the public library and the marketplace for commodity exchange. Considering how, on
the one side, traditional public libraries are having their collections deaccessioned, hours of
operation cut, and are in some cases being closed down entirely, and on the other side, the
traditional publishing industry finds its stores, books, and profits dematerialized, the image is
perhaps appropriate. Server racks, in photographs inside data centers, strike an eerie resemblance
to library stacks - - while e-readers are consciously designed to look and feel something like a
book. Yet, when one peers down into the screen of the device, one sees both the book - and the
library.

Like a Facebook account, which must uniquely correspond to a real person, the e-reader is an
individualizing device. It is the object that establishes trusted access with books stored in the cloud
and ensures that each and every person purchases their own rights to read each book. The only
transfer that is allowed is of the device itself, which is the thing that a person actually does own.
But even then, such an act must be reported back to the cloud: the hardware needs to be deregistered and then re-registered with credit card and authentication details about the new owner.

This is no library - or it's only a library in the most impoverished sense of the word. It is a new
enclosure, and it is a familiar story: things in the world (from letters, to photographs, to albums, to
books) are digitized (as emails, JPEGs, MP3s, and PDFs) and subsequently migrate to a remote
location or service (Gmail, Facebook, iTunes, Kindle Store). The middle phase is the biggest
disruption, when the interface does the poorest job concealing the material transformations taking
place, when the work involved in creating those transformations is most apparent, often because
the person themselves is deeply involved in the process (of ripping vinyl, for instance). In the third
phase, the user interface becomes easier, more “frictionless,” and what appears to be just another
application or folder on one’s computer is an engorged, property-and-energy-hungry warehouse a
thousand miles away.

## Capture, Loss

Intellectual property's enclosure is easy enough to imagine in warehouses of remote, secure hard
drives. But the cloud internalizes processing as well as storage, capturing the forms of
cooperation and collaboration that characterize the new economy and its immaterial labor. Social
relations are transmuted into database relations on the "social web," which absorbs
self-organization as well. Because of this, the cloud affects the production of
publications as strongly as it does their consumption, in the traditional sense.

Storage, applications, and services offered in the cloud are marketed for consumption by authors
and publishers alike. Document editing, project management, and accounting are peeled slowly
away from the office staff and personal computers into the data centers; interfaces are established
into various publication channels from print on demand to digital book platforms. In the fully
realized vision of cloud publishing, the entire technical and logistical apparatus is externalized,
leaving only the human laborers and their thin devices. Little distinguishes the author-object
from the editor-object from the reader-object. All of them maintain their position in the
network by paying for lightweight computers and their updates, cloud services, and broadband
internet connections.

On the production side of the book, the promise of the cloud is a recovery of the profits “lost” to
file-sharing, as all that exchange is disciplined, standardized and measured. Consumers are finally
promised the access to the history of human knowledge that they had already improvised by
themselves, but now without the omnipresent threat of legal prosecution. One has the sneaking
suspicion, though, that such a compromise is as hollow as the promises, made to a desperate city,
of the jobs that will be created in a newly constructed data center - and that pitting “food on the table”
against “access to knowledge” is both a distraction from and a legitimation of the forms of power
emerging in the cloud. It's a distraction because it's by policing access to knowledge that the
middle-man platform can extract value from publication, both on the writing and reading sides of
the book; and it's a legitimation because the platform poses itself as the only entity that can resolve
the contradiction between the two sides.

When the platform recedes behind the interface, these two sides are the most visible
antagonism, locked in a tug-of-war with each other - yet neither the “producers” nor the “consumers” of
publications are becoming more wealthy, or working less to survive. If we turn the picture
sideways, however, a new contradiction emerges, between the indebted, living labor - of authors,
editors, translators, and readers - on one side, and, on the other, data centers, semiconductors,
mobile technology, expropriated software, power companies, and intellectual property.

The talk in the data center industry of the “industrialization” of the cloud refers to the scientific
approach to improving design, efficiency, and performance. But the term also recalls the basic
narrative of the Industrial Revolution: the movement from home-based manufacturing by hand to
large-scale production in factories. As desktop computers pass into obsolescence, we shift from a
networked, but small-scale, relationship to computation (think of “home publishing”) to a
reorganized form of production that puts the accumulated energy of millions to work through
these cloud companies and their modernized data centers.

What kind of buildings are these blank superstructures? Factories for the 21st century? An engineer
named Ken Patchett described the Facebook data center that way in a television interview: “This is
a factory. It’s just a different kind of factory than you might be used to.” Those factories that we’re
“used to” continue to exist (at Foxconn, for instance), producing the infrastructure, under
recognizably exploitative conditions, for a “different kind of factory” - a factory that extends far
beyond the walls of the data center.

But the idea of the factory is only part of the picture - this building is also a mine, and the
dispersed workforce devotes most of its waking hours to mining-in-reverse, packing it full of data,
under the expectation that someone, soon, will figure out how to pull out something valuable.

Both metaphors rely on the image of a mass of workers (dispersed as it may be) and leave aside a darker
and more difficult possibility: the data center is like the hydroelectric plant, damming up property,
sociality, creativity and knowledge, while engineers and financiers look for the algorithms to
release the accumulated cultural and social resources on demand, as profit.

This returns us to the interface, site of the struggles over the management and control of access to
property and infrastructure. Previously, these struggles were situated within the computer-object
and the implied freedom provided by its computation, storage, and possibilities for connection
with others. Now, however, the eviscerated device is more interface than object, and it is exactly
here at the interface that the new technological enclosures have taken form (for example, see
Apple's iOS products, Google's search box, and Amazon's "marketplace"). Control over the
interface is guaranteed by control over the entire techno-business stack: the distributed hardware
devices, centralized data centers, and the software that mediates the space between. Every major
technology corporation must now operate on all levels to protect against any loss.

There is a centripetal force to the cloud, and this essay has been written in its irresistible pull. In
spite of the sheer mass of capital that is organized to produce this gravity and the seeming
insurmountability of it all, there is no chance that the system will absolutely manage and control
the noise within it. Riots break out on the factory floor; algorithmic trading wreaks havoc on the
stock market in an instant; data centers go offline; 100 million Facebook accounts are discovered
to be fake; the list will go on. These cracks in the interface don't point to any possible future, or
any desirable one, but they do draw attention to openings that might circumvent the logic of
access.

"What happens from there is another question." This is where I left things off in the text when I
finished it a year ago. It's a disappointing ending: we just have to invent ways of occupying the
destruction, violence and collapse that emerge out of economic inequality, global warming,
dismantled social welfare, and so on. And not much has happened since then to make us
very optimistic - here, perhaps, I only have to mention the NSA. But as I began with an ending, I
really should end at a beginning.

I think we were obliged to adopt a negative, critical position in response to the cyber-utopianism of
the last almost 20 years, whether in its naive or cynical forms. We had to identify and theorize the
darker side of things. But it can become habitual, and when the dark side materializes, as it has
over the past few years - so that everyone knows the truth - then the obligation flips around,
doesn't it? To break out of habitual criticism as the tacit, defeated acceptance of what is. But, what
could be? Where do we find new political imaginaries? Not to ask what is the bright side, or what
can we do to cope, but what are the genuinely emancipatory possibilities that are somehow still
latent, buried under the present - or emerging within those ruptures in it? I can't make it all
the way to a happy ending, to a happy beginning, but at least it's a beginning and not the end.

 
