dat in Dockray, Forster & Public Office 2018


are unwanted by official institutions or, worse,
buried beneath good intentions and bureaucracy, then what tools and platforms
and institutions might we develop instead?

While trying to both formulate and respond to these questions, we began making
Dat Library and HyperReadings:

**Dat Library** distributes libraries across many computers so that many
people can provide disk space and bandwidth, sharing in the labour and
responsibility of the archival infrastructure.

**HyperReadings** implements ‘reading lists’: a structured set of pointers
(a list, a syllabus, a bibliography, etc.) into one or more libraries,
_activating_ the archives.
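
As a rough illustration (and not HyperReadings’ actual format, which is
defined in its own repositories), a reading list can be thought of as an
ordered set of pointers, each naming a library by its key and a path within
it:

```js
// Hypothetical, minimal shape for a reading list: an ordered set of pointers
// into one or more Dat libraries. HyperReadings' real data model differs;
// this only illustrates the idea of "structured pointers into libraries".
const readingList = {
  title: 'An example syllabus',
  entries: [
    {
      // 64-character hexadecimal key identifying a Dat library
      library: '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf',
      // hypothetical path and note, for illustration only
      path: '/texts/week-01-introduction.pdf',
      note: 'Read the introduction first.'
    }
    // ...further pointers, possibly into different libraries
  ]
}

// 'Activating' the archive means resolving each pointer against whichever
// copies of that library are reachable on the network.
for (const entry of readingList.entries) {
  console.log(entry.library, '->', entry.path)
}
```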

## Installation

The easiest way to get started is to install [Dat Library as a desktop
app](http://dat-dat-dat-library.hashbase.io), but there is also a programme
called ‘[datcat](http://github.com/sdockray/dat-cardcat)’, which can be run on
the command line or included in other NodeJS projects.

## Accidents of the archive


s worldwide. When
expanding the scope to consider public, private, and community libraries, that
number becomes uncountable.

Published during the early days of the World Wide Web, the report acknowledges
the emerging role of digitization (“online databases, CD-ROM etc.”), but today
we might reflect on the last twenty years, which have also introduced new forms
of loss.

Digital archives and libraries are subject to a number of potential hazards:
technical accidents like disk failures, accidental deletions, misplaced data
and imperfect data migrations, as well as political-economic accidents like
defunding of the hosting institution, deaccessioning parts of the collection
and sudden restrictions of access rights. Immediately after library.nu was
shut down on the grounds of copyright in


e as sites for [biblioleaks](https://www.jmir.org/2014/4/e112/).
Furthermore, given the vulnerability of these archives, we ought to look for
alternative approaches that do not rule out using their resources, but which
also do not _depend_ on them.

Dat Library takes the concept of “a library of libraries” not to manifest it
in a single, universal library, but to realise it progressively and partially
with different individuals, groups and institutions.

## Archival properties

So far, the empha


reservation, but ultimately create a rarefied
relationship between the archives and their publics. Disregarding this
tendency toward preciousness, we also introduce _adaptability_ as a
fundamental consideration in the making of the projects Dat Library and
HyperReadings.

To adapt is to fit something for a new purpose. It emphasises that the archive
is not a dead object of research but a set of possible tools waiting to be
activated in new circumstances. This is always a possibility of an a


ting computers running mostly open-source
software can be the guts of an advanced capitalist engine, like Facebook. So,
could it be possible to organise our networked devices, embedded as they are
in a capitalist economy, in an anti-capitalist way?

Dat Library is built on the [Dat
Protocol](https://github.com/datproject/docs/blob/master/papers/dat-paper.md),
a peer-to-peer protocol for syncing folders of data. It is not the first
distributed protocol ([BitTorrent](https://en.wikipedia.org/wiki/BitTorrent)
is the best known and is noted as an inspiration for Dat), nor is it the only
new one being developed today ([IPFS](https://ipfs.io) or the Inter-Planetary
File System is often referenced in comparison), but it is unique in its
foundational goals of preserving scientific knowledge as a public good. Dat’s
provocation is that by creating custom infrastructure it will be possible to
overcome the accidents that restrict access to scientific knowledge. We would
specifically acknowledge here the role that the Dat community — or any
community around a protocol, for that matter — has in the formation of the
world that is built on top of that protocol. (For a sense of the Dat
community’s values, see its
[code of conduct](https://github.com/datproject/Code-of-Conduct/blob/master/CODE_OF_CONDUCT.md).)
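
To make this concrete, here is a sketch of how a folder of texts is shared
from NodeJS using the dat-node module (Dat Library and datcat are built on the
same protocol; this is not their internal code):

```js
// Sketch: share an existing folder of texts over the Dat protocol using the
// dat-node module. Not Dat Library's own code, just the underlying pattern.
const Dat = require('dat-node')

Dat('./my-library', (err, dat) => {
  if (err) throw err

  dat.importFiles()   // import the folder's files into the archive
  dat.joinNetwork()   // announce the archive so that peers can find and sync it

  // The 64-character key is what you pass around so others can access the library.
  console.log('library key:', dat.key.toString('hex'))
})
```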

When running Dat Library, a person sees their list of libraries. These can be
thought of as similar to a
[torrent](https://en.wikipedia.org/wiki/Torrent_file), where items are stored
across many computers. This means that many people will share in the provision
of di


onymous with accessibility — if
something can’t be accessed, it doesn’t exist. Here, we disentangle the two in
order to consider _access_ independently of questions of resilience.

##### Technically Accessible

When you create a new library in Dat, a unique 64-digit “key” will
automatically be generated for it. An example key is
`6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf`, which
points to a library of texts. In order for someone else to see the library you
have creat


unique key (by email,
chat, on paper or you could publish it on your website). In short, _you_
manage access to the library by copying that key, and then every key holder
also manages access _ad infinitum_.
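
Concretely, anyone who holds that key can read and mirror the library. A
sketch of doing so with the dat-node module, assuming an empty local folder:

```js
// Sketch: mirror a library given only its 64-character key.
const fs = require('fs')
const Dat = require('dat-node')

const key = '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf'

fs.mkdirSync('./mirror-of-library', { recursive: true })

Dat('./mirror-of-library', { key }, (err, dat) => {
  if (err) throw err
  dat.joinNetwork()   // find peers who hold the data and start syncing from them
  console.log('mirroring', dat.key.toString('hex'))
})
```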

At the moment this has its limitations. A Dat is only writable by a single
creator. If you want to collaboratively develop a library or reading list, you
need to have a single administrator managing its contents. This will change in
the near future with the integration of
[hyperdb](https://github.com/mafintosh/hyperdb) into Dat’s core. At that
point, the platform will enable multiple contributors and the management of
permissions, and our single key will become a key chain.
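
For a sense of how that will work: in hyperdb every contributor has a local
writer key, and an existing writer must authorise it before that contributor’s
entries are accepted. A rough sketch, based on hyperdb’s documented API at the
time of writing (details may change as it is folded into Dat):

```js
// Sketch of hyperdb's multi-writer model: a shared database that additional
// writers can be authorised into by an existing writer.
const hyperdb = require('hyperdb')

const db = hyperdb('./catalogue.db', { valueEncoding: 'utf-8' })

db.on('ready', () => {
  // Send this to the library's current administrator so they can authorise you:
  console.log('my local writer key:', db.local.key.toString('hex'))

  // On the administrator's side, authorisation looks roughly like
  //   db.authorize(contributorKey, callback)
  // after which both parties can write entries:
  db.put('/catalogue/example', 'a record added by one of several writers', (err) => {
    if (err) throw err
  })
})
```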

How is this key any different from knowing the domain name of a website? If a
site isn’t indexed


because they are shared, i.e., held in common.

It is important, while imagining the possibilities of a technological
protocol, to also consider how different _cultural protocols_ might be
implemented and protected through the life of a project like Dat Library.
Certain aspects of this might be accomplished through library metadata, but
ultimately it is through people hosting their own archives and libraries
(rather than, for example, having them hosted by a state institution) that
cultural protocol


ic group, but rather that
it should generate spaces that people can inhabit as they wish. The poet Jean
Paul once wrote that books are thick letters to friends. Books as
infrastructure enable authors to find their friends. This is how we ideally
see Dat Library and HyperReadings working.

## Use cases

We began work on Dat Library and HyperReadings with a range of exemplary use
cases, real-world circumstances in which these projects might intervene. Not
only would the use cases make demands on the software we were, and still are,
beginning to write, but they would also give us demands to make on the Dat
protocol, which is itself still in the formative stages of development. And,
crucially, in an iterative feedback loop, this process of design produces
transformative effects on those situations described in the use cases
themselves, resulting in furt


most invisible. On the other hand, if the
issues are presented together, with commentary and surrounding publications,
the political environment becomes palpable. Wendy and Chris have kindly
allowed us to make their personal collection available via Dat Library (the
key is: 73fd26846e009e1f7b7c5b580e15eb0b2423f9bea33fe2a5f41fac0ddb22cbdc), so
you can discover this for yourself.

### Academia.edu alternative

Academia.edu, started in 2008, has raised tens of millions of dollars as a
social network fo


rnal/2015/10/18/does-academiaedu-mean-open-access-is-becoming-irrelevant.html)
that “its financial rationale rests … on the ability of the angel-investor and
venture-capital-funded professional entrepreneurs who run Academia.edu to
exploit the data flows generated by the academics who use the platform as an
intermediary for sharing and discovering research”. Moreover, he emphasises
that in the open-access world (outside of the exploitative practice of
for-profit publishers like Elsevier, who charge a premium for subscriptions),
the privileged position is to be the one “_who gate-keeps the data generated
around the use of that content_”. This lucrative position has been produced by
recent “[recentralising tendencies](http://commonstransition.org/the-revolution-will-not-be-decentralised-blockchains/)”
of the internet, which in Acade


ies, personal web pages, and
other archives.

Is it possible to redecentralise? Can we break free of the subjectivities that
Academia.edu is crafting for us as we are interpellated by its infrastructure?
It is incredibly easy for any scholar running Dat Library to make a library of
their own publications and post the key to their faculty web page, Facebook
profile or business card. The tricky — and interesting — thing would be to
develop platforms that aggregate thousands of these libraries in direct
competition with Academia.edu. This way, individuals would maintain control
over their own work; their peer groups would assist in mirroring it; and no
one would be capitalising on the sale of data related to their performance and
popularity.

We note that Academia.edu is a typically centripetal platform: it provides no
tools for exporting one’s own content, so an alternative would necessarily be
a kind of centrifuge.
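
One small, speculative step toward such a centrifuge can be sketched with the
dat-node module: a peer group keeps a plain list of its members’ library keys
and each member mirrors them all. This is an illustration rather than an
existing tool; the list below reuses the example key from earlier.

```js
// Sketch: mirror every library in a peer group's shared list of keys,
// so that each member helps host the others' publications.
const fs = require('fs')
const path = require('path')
const Dat = require('dat-node')

const memberLibraries = [
  '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf'
  // ...one key per colleague
]

memberLibraries.forEach((key) => {
  const dir = path.join('./mirrors', key)
  fs.mkdirSync(dir, { recursive: true })

  Dat(dir, { key }, (err, dat) => {
    if (err) throw err
    dat.joinNetwork()   // help host this colleague's library
  })
})
```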

This alternative is becoming increasingly realistic. With open-access journals
already paving the way, there has more recently been a [call for free and open
access to citation data](https://www.insidehighered.com/news/2017/12/06/scholars-push-free-access-online-citation-data-saying-they-need-and-deserve-access).
[The Initiative for Open Citations (I4OC)](https://i4oc.org) is
mobilising against the privatisation of data and working towards the
unrestricted availability of scholarly citation data. We see their new
database of citations as making this centrifugal force a possibility.

### Publication format

In writing this README, we have strung together several references. This
writing might be published in a book and the references will be


suppositions that amount to a
set of social propositions.

### The role of individuals in the age of distribution

Different people have different technical resources and capabilities, but
everyone can contribute to an archive. By simply running the Dat Library
software and adding an archive to it, a person is sharing their disk space and
internet bandwidth in the service of that archive. At first, it is only the
archive’s index (a list of the contents) that is hosted, but if the person
downloads


ise together to
guarantee the durability and accessibility of an archive, saving a future
UbuWeb from ever having to worry about its ISP ‘pulling the plug’. As
supporters of many archives, as members of many communities, individuals can
use Dat Library to perform this function many times over.
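
In technical terms this corresponds roughly to ‘sparse’ replication: a peer
holds the archive’s metadata (its index) right away and fetches file contents
only when it reads them, after which it helps provide those files to other
peers as well. A sketch using dat-node’s sparse option:

```js
// Sketch: join an archive sparsely, hosting its index immediately and
// fetching (then co-hosting) file contents only as they are requested.
const fs = require('fs')
const Dat = require('dat-node')

const key = '73fd26846e009e1f7b7c5b580e15eb0b2423f9bea33fe2a5f41fac0ddb22cbdc'

fs.mkdirSync('./sparse-copy', { recursive: true })

Dat('./sparse-copy', { key, sparse: true }, (err, dat) => {
  if (err) throw err
  dat.joinNetwork()

  // Listing the index does not download the files themselves; reading a file
  // would pull its blocks from other peers and start sharing them onward.
  dat.archive.readdir('/', (err, fileList) => {
    if (err) throw err
    console.log('index of the archive:', fileList)
  })
})
```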

On the Web, individuals are usually users or browsers — they use browsers. In
spite of the ostensible interactivity of the medium, users are kept at a
distance from the actual code, the infrastructure of a website, which is run
on a server. With a distributed protocol like Dat, applications such as
[Beaker Browser](https://beakerbrowser.com) or Dat Library eliminate the
central server, not by destroying it, but by distributing it across all of the
users. Individuals are then not _just_ users, but also hosts. What kind of
subject is this user-host, especially as compared to the user of the serve


ly written in a
[Git](https://en.wikipedia.org/wiki/Git)
[repository](https://en.wikipedia.org/wiki/Repository_\(version_control\)).
Git is a free and open-source tool for version control used in software
development. All the code for HyperReadings, Dat Library and their numerous
associated modules is managed openly using Git and hosted on GitHub under
open-source licenses. In a real way, Git’s specification formally binds our
collaboration as well as the open invitation for others to participate


dat in Dockray 2013


eat. Such repetition then gives way to copy-and-pasting (or merely calling). The analogy here is to the robot, to the replacement of human labor
with technology.

Now, when a program is in the midst of being executed, the computer's memory fills with data - but some of that is obsolete, no longer necessary for that program to run. If left alone, the memory
would become clogged, the program would crash, the computer might crash. It is the role of the
garbage collector to free up memory, deleting what i


to every person’s computer. If the files were books.. then this collective collection would be a public library.

In order for a system like this to work, for the inputs and the outputs to actually engage with one
another to produce action or transmit data, there needs to be something in place already to enable
meaningful couplings. Before there is any interaction or any relationship, there must be some
common ground in place that allows heterogeneous objects to ‘talk to each other’ (to use a phras


through low prices and wide selection is
the most visible platform for buying books and uses that position to push retailers and publishers
both to, at best, the bare minimum of profitability.

In addition to selling things to people and collecting data about its users (what they look at and
what they buy) to personalize product recommendations, Amazon has also made an effort to be a
platform for the technical and logistical parts of other retailers. Ultimately collecting data from
them as well, Amazon realizes a competitive advantage from having a comprehensive, up-to-the-minute perspective on market trends and inventories. This volume of data is so vast and valuable
that warehouses packed with computers are constructed to store it, protect it, and make it readily
available to algorithms. Data centers, such as these, organize how commodities circulate (they run
business applications, store data about retail, manage fulfillment) but also - increasingly - they
hold the commodity itself - for example, the book. Digital book sales started the millennium very
slowly but by 2010 had overtaken hardcover sales.

Amazon’s store of digital books (or Apple’s or Google’s, for that matter) is a distorted reflection of
the collection circulating within the file-sharing network, displaced from personal computers to
corporate data centers. Here are two regimes of digital property: the swarm and the cloud. For
swarms (a reference to swarm downloading, where a single file can be downloaded in parallel
from multiple sources), property is held in common between peers; on the cloud, however, property
is positioned out of reach, accessible only through an interface that has absorbed legal
and business requirements.

It's just half of the story, however, to associate the cloud with mammoth data centers; the other
half is to be found in our hands and laps. Thin computing devices, including tablets and e-readers,
iPads and Kindles, and mobile phones, have co-evolved with data centers, offering powerful, lightweight
computing precisely because so much processing and storage has been externalized.

In this technical configuration of the cloud, the thin computer and the fat data center meet through
an interface, inevitably clean and simple, that manages access to the remote resources. Typically,
a person needs to agree to certain “terms of service,” have a unique, measurable account, and
provide payment information; in


of
operation cut, and are in some cases being closed down entirely, and on the other side, the
traditional publishing industry finds its stores, books, and profits dematerialized, the image is
perhaps appropriate. Server racks, in photographs inside data centers, bear an eerie resemblance
to library stacks -- while e-readers are consciously designed to look and feel something like a
book. Yet, when one peers down into the screen of the device, one sees both the book - and the
library.

Like a Fac


arehouses of remote, secure hard
drives. But the cloud internalizes processing as well as storage, capturing the new forms of cooperation and collaboration characterizing the new economy and its immaterial labor. Social
relations are transmuted into database relations on the "social web," which absorbs self-organization
as well. Because of this, the cloud impacts as strongly on the production of
publications as on their consumption, in the traditional sense.

Storage, applications, and services offered in the cloud are marketed for consumption by authors
and publishers alike. Document editing, project management, and accounting are peeled slowly
away from the office staff and personal computers into the data centers; interfaces are established
into various publication channels from print on demand to digital book platforms. In the fully
realized vision of cloud publishing, the entire technical and logistical apparatus is externalized,
leaving only the h


ised by
themselves, but now without the omnipresent threat of legal prosecution. One has the sneaking
suspicion though.. that such a compromise is as hollow.. as the promises to a desperate city of the
jobs that will be created in a newly constructed data center -- and that pitting “food on the table”
against “access to knowledge” is both a distraction from and a legitimation of the forms of power
emerging in the cloud. It's a distraction because it's by policing access to knowledge that the


ations are becoming more wealthy, or working less to survive. If we turn the picture
sideways, however, a new contradiction emerges, between the indebted, living labor - of authors,
editors, translators, and readers - on one side, and on the other.. data centers, semiconductors,
mobile technology, expropriated software, power companies, and intellectual property.
The talk in the data center industry of the “industrialization” of the cloud refers to the scientific
approach to improving design, efficiency, and performance. But the term also recalls the basic
narrative of the Industrial Revolution: the movement from home-based


nce, we shift from a
networked, but small-scale, relationship to computation (think of “home publishing”) to a
reorganized form of production that puts the accumulated energy of millions to work through
these cloud companies and their modernized data centers.

What kind of buildings are these blank superstructures? Factories for the 21st century? An engineer
named Ken Patchett described the Facebook data center that way in a television interview, “This is
a factory. It’s just a different kind of factory than you might be used to.” Those factories that we’re
“used to” continue to exist (at Foxconn, for instance), producing the infrastructure, under
recognizably exploitative conditions, for a “different kind of factory,” - a factory that extends far
beyond the walls of the data center.

But the idea of the factory is only part of the picture - this building is also a mine.. and the
dispersed workforce devote most of their waking hours to mining-in-reverse, packing it full of data,
under the expectation that someone - soon - will figure out how to pull out something valuable.

Both metaphors rely on the image of a mass of workers (dispersed as it may be) and leave a darker
and more difficult possibility: the data center is like the hydroelectric plant, damming up property,
sociality, creativity and knowledge, while engineers and financiers look for the algorithms to
release the accumulated cultural and social resources on demand, as profit.

This returns us


osures have taken form (for example, see
Apple's iOS products, Google's search box, and Amazon's "marketplace"). Control over the
interface is guaranteed by control over the entire techno-business stack: the distributed hardware
devices, centralized data centers, and the software that mediates the space between. Every major
technology corporation must now operate on all levels to protect against any loss.

There is a centripetal force to the cloud and this essay has been written in its irresistible


gravity and the seeming
insurmountability of it all, there is no chance that the system will absolutely manage and control
the noise within it. Riots break out on the factory floor; algorithmic trading wreaks havoc on the
stock market in an instant; data centers go offline; 100 million Facebook accounts are discovered
to be fake; the list will go on. These cracks in the interface don't point to any possible future, or
any desirable one, but they do draw attention to openings that might circumvent th

 
