Dekker & Barok
Copying as a Way to Start Something New: A Conversation with Dusan Barok about Monoskop
2017


COPYING AS A WAY TO START SOMETHING NEW
A Conversation with Dusan Barok about Monoskop

Annet Dekker

Dusan Barok is an artist, writer, and cultural activist involved
in critical practice in the fields of software, art, and theory. After founding and organizing the online culture portal
Koridor in Slovakia from 1999 to 2002, in 2003 he co-founded the BURUNDI media lab, where he organized the Translab
evening series. A year later, the first ideas about building an
online platform for texts and media started to emerge and
Monoskop became a reality. More than a decade later, Barok
is well-known as the main editor of Monoskop. In 2016, he
began a PhD research project at the University of Amsterdam. His project, titled Database for the Documentation of
Contemporary Art, investigates art databases as discursive
platforms that provide context for artworks. In an extended
email exchange, we discuss the possibilities and restraints
of an online ‘archive’.
ANNET DEKKER

You started Monoskop in 2004, already some time ago. What
does the name mean?
DUSAN BAROK

‘Monoskop’ is the Slovak equivalent of the English ‘monoscope’, which means an electron tube used in analogue TV broadcasting to produce images of test cards, station logotypes, and error messages, but also for calibrating cameras. Monoscopes were automatized television announcers designed to
speak to both live and machine audiences about the status
of a channel, broadcasting purely phatic messages.
AD
Can you explain why you wanted to do the project and how it
developed into what it is now? In other words, what were your
main aims and have they changed? If so, in which direction
and what caused these changes?
DB

I began Monoskop as one of the strands of the BURUNDI
media lab in Bratislava. Originally, it was designed as a wiki
website for documenting media art and culture in the eastern part of Europe, whose backbone consisted of city entries
composed of links to separate pages about various events, initiatives, and individuals. In the early days it was modelled
on Wikipedia (which had been running for two years when
Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over
the years, the geographic and thematic boundaries have
gradually expanded to embrace the arts and humanities in
their widest sense, focusing primarily on lesser-known
phenomena (see for example https://monoskop.org/Features, accessed 28 May 2016). Perhaps the biggest change is the ongoing shift from mapping people, events, and places towards synthesizing discourses.
A turning point occurred during my studies at the
Piet Zwart Institute, in the Networked Media programme
from 2010–2012, which combined art, design, software,
and theory with support in the philosophy of open source
and prototyping. While there, I was researching aspects of
the networked condition and how it transforms knowledge,
sociality and economics: I wrote research papers on leaking
as a technique of knowledge production, a critique of the
social graph, and on the libertarian values embedded in the
design of digital currencies. I was ready for more practice.
When Aymeric Mansoux, one of the tutors, encouraged me
to develop my then side-project Monoskop into a graduation
work, the timing was good.
The website got its own domain, a redesign, and most crucially, the Monoskop wiki was restructured from its focus on media art and culture towards the much wider embrace of the arts and humanities. It turned into a media library of sorts. The graduation work also consisted of a symposium about personal collecting and media archiving (https://monoskop.org/Symposium, accessed 28 May 2016), which saw its loose follow-ups on media aesthetics in Bergen (https://monoskop.org/The_Extensions_of_Many, accessed 28 May 2016) and on knowledge classification and archives in Mons (https://monoskop.org/Ideographies_of_Knowledge, accessed 28 May 2016) last year.

AD

Did you have a background in library studies, or have you taken their ideas/methods of systemization and categorization (metadata)? If not, what are your methods and how did you develop them?

DB

Besides the standard literature in information science (I
have a degree in information technologies), I read some
works of documentation scientists Paul Otlet and Suzanne
Briet, historians such as W. Boyd Rayward and Ronald E.
Day, as well as translated writings of Michel Pêcheux and
other French discourse analysts of the 1960s and 1970s.
This interest was triggered in late 2014 by the confluence
of Femke’s Mondotheque project and an invitation to be an
artist-in-residence in Mons in Belgium at the Mundaneum,
home to Paul Otlet’s recently restored archive.
This led me to identify three tropes of organizing and
navigating written records, which has guided my thinking
about libraries and research ever since: class, reference,
and index. Classification entails tree-like structuring, such
as faceting the meanings of words and expressions, and
developing classification systems for libraries. Referencing
stands for citations, hyperlinking and bibliographies. Indexing ranges from the listing of occurrences of selected terms
to an ‘absolute’ index of all terms, enabling full-text search.
With this in mind, I have done a number of experiments.
There is an index of selected persons and terms from across the Monoskop wiki and Log (https://monoskop.org/Index, accessed 28 May 2016). There is a growing list of wiki entries with bibliographies and institutional infrastructures of fields and theories in the humanities (https://monoskop.org/Humanities, accessed 28 May 2016). There is a lexicon aggregating entries from some ten dictionaries of the humanities into a single page with hyperlinks to each full entry (unpublished). There is an alternative interface to the Monoskop Log, in which entries are navigated solely through a tag cloud acting as a multidimensional filter (unpublished). There is a reader containing some fifty books whose mutual references are turned into hyperlinks, and whose main interface consists of terms specific to each text, generated through a tf-idf algorithm (unpublished). And so on.
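As a rough illustration of the last of these experiments, a tf-idf term extractor can be sketched in a few lines of TypeScript; the tokenization and weighting below are simplified and are not Monoskop’s own implementation.

```typescript
// Sketch: score terms per text with tf-idf, so that each text's most
// distinctive terms can serve as its navigation interface.
// Purely illustrative; tokenization and weighting are simplified.

function tokenize(text: string): string[] {
  return text.toLowerCase().match(/[a-z][a-z-]{2,}/g) ?? [];
}

function tfidfTerms(corpus: Map<string, string>, topN = 10): Map<string, string[]> {
  // Document frequency: in how many texts each term appears.
  const df = new Map<string, number>();
  const tokensByDoc = new Map<string, string[]>();
  for (const [title, text] of corpus) {
    const tokens = tokenize(text);
    tokensByDoc.set(title, tokens);
    for (const term of new Set(tokens)) {
      df.set(term, (df.get(term) ?? 0) + 1);
    }
  }
  const n = corpus.size;
  const result = new Map<string, string[]>();
  for (const [title, tokens] of tokensByDoc) {
    // Term frequency within this text.
    const tf = new Map<string, number>();
    for (const term of tokens) tf.set(term, (tf.get(term) ?? 0) + 1);
    // Rank by tf * idf and keep the topN most text-specific terms.
    const ranked = [...tf.entries()]
      .map(([term, count]): [string, number] =>
        [term, (count / tokens.length) * Math.log(n / (df.get(term) ?? 1))])
      .sort((a, b) => b[1] - a[1])
      .slice(0, topN)
      .map(([term]) => term);
    result.set(title, ranked);
  }
  return result;
}
```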

AD

Indeed, looking at the archive in many alternative ways has been an interesting process, clearly showing the influence of a changing back-end system. Are you interested in the idea of sharing and circulating texts as a new way not just of accessing and distributing but perhaps also of production—and publishing? I’m thinking of how Aaaaarg started as a way to share and exchange ideas about a text. In what way do you think Monoskop plays (or could play) with these kinds of mechanisms? Do you think it brings out a new potential in publishing?

DB

The publishing market frames the publication as a singular body of work, autonomous from other titles on offer, and subjects it to the rules of the market—with a price tag and copyright notice attached. But for scholars and artists, these are rarely an issue. Most academic work is subsidized from public sources in the first place, and many would prefer to give their work away for free, since openness attracts more citations. Why they opt to submit to the market is for quality editing and an increase of their own symbolic value in direct proportion to the ranking of their publishing house. This is not dissimilar from the music industry. And indeed, for many the goal is to compose chants that would gain popularity across academia and get their place in the popular imagination.
On the other hand, besides providing access, digital libraries are also fit to provide context by treating publications as a corpus of texts that can be accessed through an unlimited number of interfaces designed with an understanding of the functionality of databases and an openness to the imagination of the community of users. This can be done by creating layers of classification, interlinking bodies of texts through references, creating alternative indexes of persons, things and terms, making full-text search possible, making visual search possible—across the whole of the corpus as well as its parts, and so on. Isn’t this what makes a difference? To be sure, websites such as Aaaaarg and Monoskop have explored only the tip of the iceberg of possibilities. There is much more to tinker and hack around.


AD

It is interesting that whilst the accessibility and search potential have radically changed, the content, a book or any other text, is still a particular kind of thing with its own characteristics and forms. Whereas the process of writing texts seems hard to change, would you be interested in creating more alliances between texts to bring out new bibliographies? In this sense, starting to produce new texts by including other texts and documents, like emails, visuals, audio, CD-ROMs, or even unpublished texts or manuscripts?
DB

Currently Monoskop is compiling more and more ‘source’
bibliographies, containing digital versions of actual texts
they refer to. This has been very much in focus in the past
two or three years and Monoskop is now home to hundreds
of bibliographies of twentieth-century artists, writers, groups,
and movements as well as of various theories and humanities disciplines (see for example https://monoskop.org/Foucault, https://monoskop.org/Lissitzky, and https://monoskop.org/Humanities; all accessed 28 May 2016). As the next step I would like to move on to enabling full-text search within each such bibliography. This will make it more apparent that the ‘source’ bibliography is a form of anthology, a corpus of texts representing a discourse. Another issue is to activate cross-references within texts—to turn page numbers in bibliographic citations inside texts into hyperlinks leading to other texts.
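As a rough illustration of that last step (not Monoskop’s actual code), turning such in-text citations into hyperlinks can be sketched as a small text transformation; the citation pattern and link syntax below are assumptions.

```typescript
// Sketch: detect simple "(Author 1969, 128)" style citations and rewrite them
// as links into other texts hosted on the same site. The regex, the link
// syntax and the lookup table are illustrative assumptions.

type Bibliography = Map<string, string>; // e.g. "Foucault 1969" -> "/Foucault/Archaeology_of_Knowledge"

function linkCitations(text: string, bib: Bibliography): string {
  // Matches e.g. "(Foucault 1969, 128)" or "(Lissitzky 1925)".
  const citation = /\(([A-Z][\w-]+) (\d{4})(?:, (\d+(?:-\d+)?))?\)/g;
  return text.replace(citation, (match, author, year, pages) => {
    const target = bib.get(`${author} ${year}`);
    if (!target) return match; // leave unknown citations untouched
    const anchor = pages ? `#page-${pages}` : "";
    return `([${author} ${year}${pages ? ", " + pages : ""}](${target}${anchor}))`;
  });
}
```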
This is to experiment further with the specificity of digital text, which is different from both oral speech and printed books. These can be described as three distinct yet mutually
encapsulated domains. Orality emphasizes the sequence
and narrative of an argument, in which words themselves
are imagined as constituting meaning. Specific to writing,
on the other hand, is referring to the written record; texts
are brought together by way of references, which in turn
create context, also called discourse. Statements are ‘fixed’
to paper and meaning is constituted by their contexts—both within a given text and within a discourse in which it is embedded. What is specific to digital text, however, is that we can search it in milliseconds. Full-text search is enabled by the index—search engines operate thanks to bots that assign each expression a unique address and store it in a database. In this respect, the index usually found at the end of a printed book is something that has been automated with the arrival of machine search.
In other words, even though knowledge in the age of the internet is still being shaped by the departmentalization of academia and its related procedures and rituals of discourse production, and its modes of expression are centred around verbal rhetoric, the flattening effects of the index have really transformed the ways in which we come to ‘know’ things. To ‘write’ a ‘book’ in this context is to produce a searchable database instead.
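The index described here can be illustrated with a minimal inverted index; again a sketch of the general mechanism, not of any particular search engine.

```typescript
// Sketch: an inverted index maps every term to the texts (and positions)
// where it occurs, which is what makes millisecond full-text search possible.

type Posting = { doc: string; position: number };

class InvertedIndex {
  private index = new Map<string, Posting[]>();

  add(doc: string, text: string): void {
    const words = text.toLowerCase().split(/\W+/).filter(Boolean);
    words.forEach((word, position) => {
      const postings = this.index.get(word) ?? [];
      postings.push({ doc, position });
      this.index.set(word, postings);
    });
  }

  // Return the documents containing every query term.
  search(query: string): string[] {
    const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
    const docSets = terms.map(
      (t) => new Set((this.index.get(t) ?? []).map((p) => p.doc))
    );
    if (docSets.length === 0) return [];
    return [...docSets[0]].filter((doc) => docSets.every((s) => s.has(doc)));
  }
}
```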

AD

So, perhaps we have finally come to ‘the death of the author’, at least insofar as automated mechanisms are becoming active agents in the (re)creation process. To return to Monoskop in its current form, what choices do you make regarding the content of the repositories? Are there things
you don’t want to collect, or wish you could but have not
been able to?
DB

In a sense, I turned to a wiki and started Monoskop as
a way to keep track of my reading and browsing. It is a
by-product of a succession of my interests, obsessions, and
digressions. That it is publicly accessible is a consequence
of the fact that paper notebooks, text files kept offline and
private wikis proved to be inadequate at the moment when I
needed to quickly find notes from reading some text earlier.
It is not perfect, but it solved the issue of immediate access
and retrieval. Plus there is a bonus of having the body of
my past ten or twelve years of reading mutually interlinked
and searchable. An interesting outcome is that these ‘notes’
are public—one is motivated to formulate and frame them so as to be readable and useful for others as well. A similar
difference is between writing an entry in a personal diary
and writing a blog post. That is also why the autonomy
of technical infrastructure is so important here. Posting
research notes on Facebook may increase one’s visibility
among peers, but the ‘terms of service’ say explicitly that
anything can be deleted by administrators at any time,
without any reason. I ‘collect’ things that I wish to be able
to return to, to remember, or to recollect easily.
AD

Can you describe the process: how do you get the books? Already digitized, or do you do a lot yourself? In other words,
could you describe the (technical) process and organizational aspects of the project?
DB

In the beginning, I spent a lot of time exploring other digital
libraries which served as sources for most of the entries on
Log (Gigapedia, Libgen, Aaaaarg, Bibliotik, Scribd, Issuu,
Karagarga, Google filetype:pdf). Later I started corresponding with a number of people from around the world (NYC,
Rotterdam, Buenos Aires, Boulder, Berlin, Ploiesti, etc.) who
contribute scans and links to scans on an irregular basis.
Out-of-print and open-access titles often come directly from
authors and publishers. Many artists’ books and magazines
were scraped or downloaded through URL manipulation
from online collections of museums, archives and libraries.
Needless to say, my offline archive is much bigger than
what is on Monoskop. I tend to put online the files I prefer
not to lose. The web is the best backup solution I have
found so far.
The Monoskop wiki is open for everyone to edit; any user
can upload their own works or scans and many do. Many of
those who spent more time working on the website ended up
being my friends. And many of my friends ended up having
an account as well :). For everyone else, there is no record
kept about what one downloaded, what one read and for
how long... we don’t care, we don’t track.


AD

In what way has the larger (free) publishing context changed your project? There are currently several free text-sharing initiatives around (some of which, like Textz.com or Aaaaarg, already existed before you started). How do you collaborate with, or distinguish yourselves from, each other?
DB

It would not be an overstatement to say that while in the previous decade Monoskop was shaped primarily by the ‘media culture’ milieu which it intended to document, the branching out of its repository of highlighted publications, Monoskop Log, in 2009, and the broadening of its focus to also include the whole of the twentieth and twenty-first centuries, situate it more firmly in the context of online archives, and especially digital libraries.
I only got to know others in this milieu later. I approached
Sean Dockray in 2010, Marcell Mars approached me the
following year, and then in 2013 he introduced me to Kenneth Goldsmith. We are in steady contact, especially through
public events hosted by various cultural centres and galleries.
The first large one was held at Ljubljana’s hackerspace Kiberpipa in 2012. Later came the conferences and workshops
organized by Kuda at a youth centre in Novi Sad (2013), by
the Institute of Network Cultures at WORM, Rotterdam (2014),
WKV and Akademie Schloss Solitude in Stuttgart (2014),
Mama & Nova Gallery in Zagreb (2015), ECC at Mundaneum,
Mons (2015), and most recently by the Media Department
of the University of Malmö (2016) (for more information see https://monoskop.org/Digital_libraries#Workshops_and_conferences, accessed 28 May 2016).
The leitmotif of all these events was the digital library, and their atmosphere can be described as the spirit of early hacker culture that eventually left the walls of a computer lab. Only rarely have there been professional librarians, archivists, and publishers among the speakers, even though the voices represented were quite diverse.
To name just the more frequent participants... Marcell
and Tom Medak (Memory of the World) advocate universal
access to knowledge informed by the positions of the Yugoslav

219

COPYING AS A WAY TO START SOMETHING NEW

Marxist school Praxis; Sean’s work is critical of the militarization and commercialization of the university (in the
context of which Aaaaarg will always come as secondary, as
an extension of The Public School in Los Angeles); Kenneth
aims to revive the literary avant-garde while standing on the
shoulders of his heroes documented on UbuWeb; Sebastian
Lütgert and Jan Berger are the most serious software developers among us, while their projects such as Textz.com and
Pad.ma should be read against critical theory and Situationist cinema; Femke Snelting has initiated the collaborative
research-publication Mondotheque about the legacy of the
early twentieth century Brussels-born information scientist
Paul Otlet, triggered by the attempt of Google to rebrand him
as the father of the internet.
I have been trying to identify implications of the digital-networked textuality for knowledge production, including humanities research, while speaking from the position
of a cultural worker who spent his formative years in the
former Eastern Bloc, experiencing freedom as that of unprecedented access to information via the internet following
the fall of the Berlin Wall. In this respect, Monoskop is a way to bring into ‘archival consciousness’ what the East had missed out on during the Cold War. And also, more generally, what the non-West had missed out on in the polarized world,
and vice versa, what was invisible in the formal Western
cultural canons.
There have been several attempts to develop new projects,
and the collaborative efforts have materialized in shared
infrastructure and introductions of new features in respective platforms, such as PDF reader and full-text search on
Aaaaarg. Marcell and Tom along with their collaborators have
been steadily developing the Memory of the World library and
Sebastian resuscitated Textz.com. Besides that, there are
overlaps in titles hosted in each library, and Monoskop bibliographies extensively link to scans on Libgen and Aaaaarg,
while artists’ profiles on the website link to audio and video
recordings on UbuWeb.


AD

It is interesting to hear that there weren’t any archivists or professional librarians involved (yet). What is your position towards these professional and institutional entities and persons?
DB

As the recent example of Sci-Hub showed, in the age of
digital networks, for many researchers libraries are primarily free proxies to corporate repositories of academic
journals (for more information see www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone, accessed 28 May 2016). Their other emerging role is that of a digital repository of works in the public domain (the role pioneered in the United States by Project Gutenberg and Internet Archive). There have been too many attempts to transpose librarians’ techniques from the paperbound
world into the digital domain. Yet, as I said before, there
is much more to explore. Perhaps the most exciting inventive approaches can be found in the field of classics, for
example in the Perseus Digital Library & Catalog and the
Homer Multitext Project. Perseus combines digital editions
of ancient literary works with multiple lexical tools in a way
that even a non-professional can check and verify a disputable translation of a quote. Something that is hard to
imagine being possible in print.
AD

I think it is interesting to see how Monoskop and other repositories like it have gained different constituencies globally; for one, you can see a shift in the texts being put up. From the start you tried to bring in a strong ‘eastern European voice’; nevertheless, at the moment the content of the repository reflects a very western perspective on critical theory. What are your future goals, and do you think it would be possible to include other voices? For example, have you ever considered the possibility of users uploading and editing texts themselves?
DB

The site certainly started with the primary focus on east-central European media art and culture, which I considered
myself to be part of in the early 2000s. I was naive enough
to attempt to make a book on the theme between 2008 and 2010.
During that period I came to notice the ambivalence of the
notion of medium in an art-historical and technological
sense (thanks to Florian Cramer). My understanding of
media art was that it is an art specific to its medium, very
much in Greenbergian terms, extended to the more recent
‘developments’, which were supposed to range from neo-geometrical painting through video art to net art.
At the same time, I implicitly understood art in the sense
of ‘expanded arts’, as employed by the Fluxus in the early
1960s—objects as well as events that go beyond the (academic) separation between the arts to include music, film,
poetry, dance, design, publishing, etc., which in turn made
me also consider such phenomena as experimental film,
electro-acoustic music and concrete poetry.
Add to it the geopolitically unstable notion of East-Central
Europe and the striking lack of research in this area and
all you end up with is a headache. It took me a while to
realize that there’s no point even attempting to write a coherent narrative of the history of media-specific expanded
arts of East-Central Europe of the past hundred years. I
ended up with a wiki page outlining the supposed milestones along with a bibliography (https://monoskop.org/CEE and https://monoskop.org/Central_and_Eastern_Europe_Bibliography, both accessed 28 May 2016).
For this strand, the wiki served as the main notebook, leaving behind hundreds of wiki entries. The Log was more or less a ‘log’ of my research path and the presence of ‘western’ theory is to a certain extent a by-product of my search for a methodology and theoretical references.
As an indirect outcome, a new wiki section was launched recently. Instead of writing a history of media-specific ‘expanded arts’ in one corner of the world, it takes a somewhat different approach. Not a sequential text, not even an anthology, it is an online single-page annotated index, a ‘meta-encyclopaedia’ of art movements and styles, intended to offer an expansion of the art-historical canonical prioritization of the western painterly-sculptural tradition to also include other artists and movements around the world (https://monoskop.org/Art, accessed 28 May 2016).
AD

Can you say something about the longevity of the project?
You briefly mentioned before that the web was your best
backup solution. Yet, it is of course known that websites
and databases require a lot of maintenance, so what will
happen to the type of files that you offer? More and more
voices are saying that, for example, the PDF format is anything but stable. How do you deal with such challenges?
DB

Surely, in the realm of bits, nothing is designed to last forever. The uncritical adoption of Flash has turned out to be perhaps the worst tragedy so far. But while there certainly were saner alternatives, if one was OK with renouncing its emblematic visual effects and the aesthetics that went with it, with PDF it is harder. There are EPUBs, but scholarly publications are simply unthinkable without page numbers, which are not supported in this format. Another
challenge the EPUB faces is from artists' books and other
design- and layout-conscious publications—its simplified
HTML format does not match the range of possibilities for
typography and layout one is used to from designing for
paper. Another open-source solution, PNG tarballs, is not
a viable alternative for sharing books.
The main schism between PDF and HTML is that one represents the domain of print (easily portable, and with fixed
page size), while the other the domain of web (embedded
within it by hyperlinks pointing both directions, and with
flexible page size). EPUB is developed with the intention of
synthesizing both of them into a single format, but instead
it reduces them into a third container, which is doomed to
reinvent the whole thing once again.
It is unlikely that there will appear an ultimate convertor
between PDF and HTML, simply because of the specificities
of print and the web and the fact that they overlap only in
some respects. Monoskop tends to provide HTML formats
next to PDFs where time allows. And if the PDF were to
suddenly be doomed, there would be a big conversion party.
On the side of audio and video, most media files on
Monoskop are in open formats—OGG and WEBM. There
are many other challenges: keeping up-to-date with PHP
and MySQL development, with the MediaWiki software
and its numerous extensions, and the mysterious ICANN
organization that controls the web domain.


AD

What were your biggest challenges besides technical ones?
For example, have you ever been in trouble regarding copyright issues, or if not, how would you deal with such a
situation?
DB

Monoskop operates on the assumption of making transformative use of the collected material. The fact of bringing
it into certain new contexts, in which it can be accessed,
viewed and interpreted, adds something that bookstores
don’t provide. Time will show whether this can be understood as fair use. It is an opt-out model and it proves to
be working well so far. Takedowns are rare, and if they are
legitimate, we comply.
AD

Perhaps related to this question, what is your experience
with user engagement? I remember Sean (from Aaaaarg,
in conversation with Matthew Fuller, Mute 2011) saying
that some people mirror or download the whole site, not
so much in an attempt to ‘have everything’ but as a way
to make sure that the content remains accessible. It is a
conscious decision because one knows that one day everything might be taken down. This is of course particularly
pertinent since, while we’re doing this interview,
Sean and Marcell are being sued by a Canadian publisher.
DB

That is absolutely true and any of these websites can disappear at any time. Archives like Aaaaarg, Monoskop or UbuWeb are created by makers rather than guardians, and it comes as an imperative to us to embrace redundancy, to promote spreading their contents across as many nodes and sites as anyone wishes. We may look at copying not merely as mirroring or making backups, but as opening up possibilities to start new libraries, new platforms, new databases. That is how these came about as well. Let there be Zzzzzrgs, Ůbuwebs and Multiskops.

Bibliography
Fuller, Matthew. ‘In the Paradise of Too Many Books: An Interview with Sean Dockray’. Mute, 4 May 2011. www.metamute.org/editorial/articles/paradise-too-many-books-interview-seandockray. Accessed 31 May 2016.
Online digital libraries
Aaaaarg, http://aaaaarg.fail.
Bibliotik, https://bibliotik.me.
Issuu, https://issuu.com.
Karagarga, https://karagarga.in.
Library Genesis / LibGen, http://gen.lib.rus.ec.
Memory of the World, https://library.memoryoftheworld.org.
Monoskop, https://monoskop.org.
Pad.ma, https://pad.ma.
Scribd, https://scribd.com.
Textz.com, https://textz.com.
UbuWeb, www.ubu.com.


Dockray, Forster & Public Office
README.md
2018


## Introduction

How might we ensure the survival and availability of community libraries,
individual collections and other precarious archives? If these libraries,
archives and collections are unwanted by official institutions or, worse,
buried beneath good intentions and bureaucracy, then what tools and platforms
and institutions might we develop instead?

While trying to both formulate and respond to these questions, we began making
Dat Library and HyperReadings:

**Dat Library** distributes libraries across many computers so that many
people can provide disk space and bandwidth, sharing in the labour and
responsibility of the archival infrastructure.

**HyperReadings** implements ‘reading lists’ or a structured set of pointers
(a list, a syllabus, a bibliography, etc.) into one or more libraries,
_activating_ the archives.
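A rough sketch of what such a structured set of pointers could look like; the field names below are our illustration, not the actual HyperReadings data model.

```typescript
// Illustrative only: a reading list as a structured set of pointers into
// one or more libraries. Field names are ours, not HyperReadings' schema.

interface Pointer {
  library: string;   // e.g. a 64-character Dat key identifying a library
  path: string;      // file within that library
  note?: string;     // optional annotation ("read chapter 1 first", etc.)
}

interface ReadingList {
  title: string;
  editors: string[];
  items: Pointer[];
}

const exampleList: ReadingList = {
  title: "Archives and their afterlives",
  editors: ["a reading group"],
  items: [
    {
      library: "<64-character dat key>",
      path: "texts/lost-memory-unesco-1996.pdf",
      note: "Background on destroyed libraries and archives.",
    },
  ],
};
```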

## Installation

The easiest way to get started is to install [Dat Library as a desktop
app](http://dat-dat-dat-library.hashbase.io), but there is also a programme
called ‘[datcat](http://github.com/sdockray/dat-cardcat)’, which can be run on
the command line or included in other NodeJS projects.

## Accidents of the Archive

The 1996 UNESCO publication [Lost Memory: Libraries and Archives Destroyed in
the Twentieth Century](http://www.stephenmclaughlin.net/ph-
library/texts/UNESCO%201996%20-%20Lost%20Memory_%20Libraries%20and%20Archives%20Destroyed%20in%20the%20Twentieth%20Century.pdf)
makes the fragility of historical repositories startlingly clear. “[A]cidified
paper that crumbles to dust, leather, parchment, film and magnetic media
attacked by light, heat, humidity or dust” all assault archives. “Floods,
fires, hurricanes, storms, earthquakes” and, of course, “acts of war,
bombardment and fire, whether deliberate or accidental” wiped out significant
portions of many hundreds of major research libraries worldwide. When
expanding the scope to consider public, private, and community libraries, that
number becomes uncountable.

Published during the early days of the World Wide Web, the report acknowledges
the emerging role of digitization (“online databases, CD-ROM etc.”), but today
we might reflect on the last twenty years, which has also introduced new forms
of loss.

Digital archives and libraries are subject to a number of potential hazards:
technical accidents like disk failures, accidental deletions, misplaced data
and imperfect data migrations, as well as political-economic accidents like
defunding of the hosting institution, deaccessioning parts of the collection
and sudden restrictions of access rights. Immediately after library.nu was
shut down on the grounds of copyright infringement in 2012, [Lawrence Liang
wrote](https://kafila.online/2012/02/19/library-nu-r-i-p/) of feeling “first
and foremost a visceral experience of loss.”

Whatever its legal status, the abrupt absence of a collection of 400,000 books
appears to follow a particularly contemporary pattern. In 2008, Aaron Swartz
moved millions of US federal court documents out from behind a paywall,
resulting in a trial and an FBI investigation. Three years later he was
arrested and indicted for a similar gesture, systematically downloading
academic journal articles from JSTOR. That year, Kazakhstani scientist
Alexandra Elbakyan began [Sci-Hub](https://en.wikipedia.org/wiki/Sci-Hub) in
response to scientific journal articles that were prohibitively expensive for
scholars based outside of Western academic institutions. (See “When everyone is
librarian, library is everywhere” for further analysis and an alternative
approach to the same issues.) The repository, growing to more than 60 million papers, was
sued in 2015 by Elsevier for $15 million, resulting in a permanent injunction.
Library Genesis, another library of comparable scale, finds itself in a
similar legal predicament.

Arguably one of the largest digital archives of the “avant-garde” (loosely
defined), UbuWeb is transparent about this fragility. In 2011, its founder
[Kenneth Goldsmith wrote](http://www.ubu.com/resources/): “by the time you
read this, UbuWeb may be gone. […] Never meant to be a permanent archive, Ubu
could vanish for any number of reasons: our ISP pulls the plug, our university
support dries up, or we simply grow tired of it.” Even the banality of
exhaustion is a real risk to these libraries.

The simple fact is that some of these libraries are among the largest in the
world yet are subject to sudden disappearance. We can only begin to guess at
what the contours of “Lost Memory: Libraries and Archives Destroyed in the
Twenty-First Century” will be when it is written ninety years from now.

## Non-profit, non-state archives

Cultural and social movements have produced histories which are only partly
represented in state libraries and archives. Often they are deemed too small
or insignificant or, in some cases, dangerous. Most frequently, they are not
deemed to be anything at all — they are simply neglected. While the market,
eager for new resources to exploit, might occasionally fill in the gaps, it is
ultimately motivated by profit and not by responsibility to communities or
archives. (We should not forget the moment [Amazon silently erased legally
purchased copies of George Orwell’s
1984](http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html)
from readers’ Kindle devices because of a change in the commercial agreement
with the publisher.)

So, what happens to these minor libraries? They are innumerable, but for the
sake of illustration let’s say that each could be represented by a single
book. Gathered together, these books would form a great library (in terms of
both importance and scale). But to extend the metaphor, the current reality
could be pictured as these books flying off their shelves to the furthest
reaches of the world, their covers flinging open and the pages themselves
scattering into bookshelves and basements, into the caring hands of relatives
or small institutions devoted to passing these words on to future generations.

While the massive digital archives listed above (library.nu, Library Genesis,
Sci-Hub, etc.) could play the role of the library of libraries, they tend to
be defined more as sites for [biblioleaks](https://www.jmir.org/2014/4/e112/).
Furthermore, given the vulnerability of these archives, we ought to look for
alternative approaches that do not rule out using their resources, but which
also do not _depend_ on them.

Dat Library takes the concept of “a library of libraries” not to manifest it
in a single, universal library, but to realise it progressively and partially
with different individuals, groups and institutions.

## Archival properties

So far, the emphasis of this README has been on _durability_ , and the
“accidents of the archive” have been instances of destruction and loss. The
persistence of an archive is, however, no guarantee of its _accessibility_ , a
common reality in digital libraries where access management is ubiquitous.
Official institutions police access to their archives vigilantly for the
ostensible purpose of preservation, but ultimately create a rarefied
relationship between the archives and their publics. Disregarding this
tendency toward preciousness, we also introduce _adaptability_ as a
fundamental consideration in the making of the projects Dat Library and
HyperReadings.

To adapt is to fit something for a new purpose. It emphasises that the archive
is not a dead object of research but a set of possible tools waiting to be
activated in new circumstances. This is always a possibility of an archive,
but we want to treat this possibility as desirable, as the horizon towards
which these projects move. We know how infrastructures can attenuate desire
and simply make things difficult. We want to actively encourage radical reuse.

In the following section, we don’t define these properties but rather discuss
how we implement (or fail to implement) them in software, while highlighting
some of the potential difficulties introduced.

### Durability

In 1964, in the midst of the “loss” of the twentieth century, Paul Baran’s
RAND Corporation publication [On Distributed
Communications](https://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM3420.pdf)
examined “redundancy as one means of building … highly survivable and reliable
communications systems”, thus midwifing the military foundations of the
digital networks that we operate within today. While the underlying framework
of the Internet generally follows distributed principles, the client–server/
request–response model of the HTTP protocol is highly centralised in practice
and is only as durable as the server.

Capitalism places a high value on originality and novelty, as exemplified in
art where the ultimate insult would be to be labelled “redundant”. Worse than
being derivative or merely unoriginal, being redundant means having no reason
to exist — a uselessness that art can’t tolerate. It means wasting a perfectly
good opportunity to be creative or innovative. In a relational network, on the
other hand, redundancy is a mode of support. It doesn’t stimulate competition
to capture its effects, but rather it is a product of cooperation. While this
attitude of redundancy arose within a Western military context, one can’t help
but notice that the shared resources, mutual support, and common
infrastructure seem fundamentally communist in nature. Computer networks are
not fundamentally exploitative or equitable, but they are used in specific
ways and they operate within particular economies. A redundant network of
interrelated, mutually supporting computers running mostly open-source
software can be the guts of an advanced capitalist engine, like Facebook. So,
could it be possible to organise our networked devices, embedded as they are
in a capitalist economy, in an anti-capitalist way?

Dat Library is built on the [Dat
Protocol](https://github.com/datproject/docs/blob/master/papers/dat-paper.md),
a peer-to-peer protocol for syncing folders of data. It is not the first
distributed protocol ([BitTorrent](https://en.wikipedia.org/wiki/BitTorrent)
is the best known and is noted as an inspiration for Dat), nor is it the only
new one being developed today ([IPFS](https://ipfs.io) or the Inter-Planetary
File System is often referenced in comparison), but it is unique in its
foundational goals of preserving scientific knowledge as a public good. Dat’s
provocation is that by creating custom infrastructure it will be possible to
overcome the accidents that restrict access to scientific knowledge. We would
specifically acknowledge here the role that the Dat community — or any
community around a protocol, for that matter — has in the formation of the
world that is built on top of that protocol. (For a sense of the Dat
community’s values — see its [code of conduct](https://github.com/datproject
/Code-of-Conduct/blob/master/CODE_OF_CONDUCT.md).)

When running Dat Library, a person sees their list of libraries. These can be
thought of as similar to a
[torrent](https://en.wikipedia.org/wiki/Torrent_file), where items are stored
across many computers. This means that many people will share in the provision
of disk space and bandwidth for a particular library, so that when someone
loses electricity or drops their computer, the library will not also break.
Although this is a technical claim — one that has been made in relation to
many projects, from Baran to BitTorrent — it is more importantly a social
claim: the users and lovers of a library will share the library. More than
that, they will share in the work of ensuring that it will continue to be
shared.
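As a back-of-the-envelope illustration of why this redundancy matters (our own numbers, not a claim about Dat’s internals): if each of n co-hosts is independently online with probability p, the chance that at least one copy of the library is reachable grows quickly with n.

```typescript
// Rough illustration: probability that at least one of n independent
// co-hosts (each online with probability p) is reachable at a given moment.
function availability(n: number, p: number): number {
  return 1 - Math.pow(1 - p, n);
}

// A single flaky home computer vs. a small community of them:
console.log(availability(1, 0.6).toFixed(3));  // 0.600
console.log(availability(5, 0.6).toFixed(3));  // 0.990
console.log(availability(10, 0.6).toFixed(3)); // 1.000 (about 0.9999)
```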

This is not dissimilar to the process of reading generally, where knowledge is
distributed and maintained through readers sharing and referencing the books
important to them. As [Peter Sloterdijk
describes](https://rekveld.home.xs4all.nl/tech/Sloterdijk_RulesForTheHumanZoo.pdf),
written philosophy is “reinscribed like a chain letter through the
generations, and despite all the errors of reproduction — indeed, perhaps
because of such errors — it has recruited its copyists and interpreters into
the ranks of brotherhood (sic)”. Or its sisterhood — but, the point remains
clear that the reading / writing / sharing of texts binds us together, even in
disagreement.

### Accessibility

In the world of the web, durability is synonymous with accessibility — if
something can’t be accessed, it doesn’t exist. Here, we disentangle the two in
order to consider _access_ independent from questions of resilience.

##### Technically Accessible

When you create a new library in Dat, a unique 64-digit “key” will
automatically be generated for it. An example key is
`6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf`, which
points to a library of texts. In order for someone else to see the library you
have created, you must provide to them your library’s unique key (by email,
chat, on paper or you could publish it on your website). In short, _you_
manage access to the library by copying that key, and then every key holder
also manages access _ad infinitum_.
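To make the ‘key’ metaphor concrete, here is a sketch that only mimics the shape of such a key; real Dat keys are derived from a cryptographic keypair rather than random bytes, so treat this purely as an illustration of how a key is handled and passed on.

```typescript
import { randomBytes } from "crypto";

// Illustrative only: a 64-hex-character identifier with the same *shape* as a
// Dat key. It is not how Dat actually derives keys; the point is simply that
// access is managed by copying and sharing this string.
function makeLibraryId(): string {
  return randomBytes(32).toString("hex"); // 32 bytes -> 64 hex characters
}

const libraryId = makeLibraryId();
console.log(`Share this key with whoever should see the library: ${libraryId}`);
// Whoever holds the key can read, and fully reproduce, the library,
// and can pass the key on again, ad infinitum.
```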

At the moment this has its limitations. A Dat is only writable by a single
creator. If you want to collaboratively develop a library or reading list, you
need to have a single administrator managing its contents. This will change in
the near future with the integration of
[hyperdb](https://github.com/mafintosh/hyperdb) into Dat’s core. At that
point, the platform will enable multiple contributors and the management of
permissions, and our single key will become a key chain.

How is this key any different from knowing the domain name of a website? If a
site isn’t indexed by Google and has a suitably unguessable domain name, then
isn’t that effectively the same degree of privacy? Yes, and this is precisely
why the metaphor of the key is so apt (with whom do you share the key to your
apartment?) but also why it is limited. With the key, one not only has the
ability to _enter_ the library, but also to completely _reproduce_ the
library.

##### Consenting Accessibility

When we say “accessibility”, some hear “information wants to be free” — but
our idea of accessibility is not about indiscriminate open access to
everything. While we do support, in many instances, the desire to increase
access to knowledge where it has been restricted by monopoly property
ownership, or the urge to increase transparency in delegated decision-making
and representative government, we also recognise that Indigenous knowledge
traditions often depend on ownership, control, consent, and secrecy in the
hands of the traditions’ people. [see [“Managing Indigenous Knowledge and
Indigenous Cultural and Intellectual
Property”](https://epress.lib.uts.edu.au/system/files_force/Aus%20Indigenous%20Knowledge%20and%20Libraries.pdf?download=1),
pg 83] Accessibility understood in merely quantitative terms isn’t able to
reconcile these positions, which is why we refuse to limit “access” to a
question of technology.

While “digital rights management” technologies have been developed almost
exclusively for protecting the commercial interests of capitalist property
owners within Western intellectual property regimes, many of the assumptions
and technological implementations are inadequate for the protection of
Indigenous knowledge. Rather than describing access in terms of commodities
and ownership of copyright, it might be defined by membership, status or role
within a community, and the rules of access would not be managed by a
generalised legal system but by the rules and traditions of the people and
their knowledge. [[“The Role of Information Technologies in Indigenous
Knowledge
Management”](https://epress.lib.uts.edu.au/system/files_force/Aus%20Indigenous%20Knowledge%20and%20Libraries.pdf?download=1),
101-102] These rights would not expire, nor would they be bought and sold,
because they are shared, i.e., held in common.

It is important, while imagining the possibilities of a technological
protocol, to also consider how different _cultural protocols_ might be
implemented and protected through the life of a project like Dat Library.
Certain aspects of this might be accomplished through library metadata, but
ultimately it is through people hosting their own archives and libraries
(rather than, for example, having them hosted by a state institution) that
cultural protocols can be translated and reproduced. Perhaps we should flip
the typical question of how might a culture exist within digital networks to
instead ask how should digital networks operate within cultural protocols?

### Adaptability (ability to use/modify as one’s own)

Durability and accessibility are the foundations of adoptability. Many would
say that this is a contradiction, that adoption is about use and
transformation and those qualities operate against the preservationist grain
of durability, that one must always be at the expense of the other. We say:
perhaps that is true, but it is a risk we’re willing to take because we don’t
want to be making monuments and cemeteries that people approach with reverence
or fear. We want tools and stories that we use and adapt and are always making
new again. But we also say: it is through use that something becomes
invaluable, which may change or distort but will not destroy — this is the
practical definition of durability. S.R. Ranganathan’s very first Law of
Library Science was [“BOOKS ARE FOR
USE”](https://babel.hathitrust.org/cgi/pt?id=uc1.$b99721;view=1up;seq=37),
which we would extend to the library itself, such that when he arrives at his
final law, [“THE LIBRARY IS A LIVING
ORGANISM”](https://babel.hathitrust.org/cgi/pt?id=uc1.$b99721;view=1up;seq=432),
we note that to live means not only to change, but also to live _in the
world_.

To borrow and gently distort another of Ranganathan’s concepts, namely
that of ‘[Infinite
Hospitality](http://www.dextersinister.org/MEDIA/PDF/InfiniteHospitality.pdf)’,
it could be said that we are interested in ways to construct a form of
infrastructure that is infinitely hospitable. By this we mean infrastructure
that accommodates the needs and desires of new users/audiences/communities and
allows them to enter and contort the technology to their own uses. We really
don’t see infrastructure as aimed at a single specific group, but rather that
it should generate spaces that people can inhabit as they wish. The poet Jean
Paul once wrote that books are thick letters to friends. Books as
infrastructure enable authors to find their friends. This is how we ideally
see Dat Library and HyperReadings working.

## Use cases

We began work on Dat Library and HyperReadings with a range of exemplary use
cases, real-world circumstances in which these projects might intervene. Not
only would the use cases make demands on the software we were and still are
beginning to write, but they would also give us demands to make on the Dat
protocol, which is itself still in the formative stages of development. And,
crucially, in an iterative feedback loop, this process of design produces
transformative effects on those situations described in the use cases
themselves, resulting in further new circumstances and new demands.

### Thorunka

Wendy Bacon and Chris Nash made us aware of Thorunka and Thor.

_Thorunka_ and _Thor_ were two underground papers in the early 1970s that
spewed out from a censorship controversy surrounding the University of New
South Wales student newspaper _Tharunka_. Between 1971 and 1973, the student
magazine was under focused attack from the NSW state police, with several
arrests made on charges of obscenity and indecency. Rather than silencing the
papers, the charges prompted a large and sustained political protest from Sydney
activists, writers, lawyers, students and others, to which _Thorunka_ and
_Thor_ were central.

> “The campaign contested the idea of obscenity and the legitimacy of the
legal system itself. The newspapers campaigned on the war in Vietnam,
Aboriginal land rights, women’s and gay liberation, and the violence of the
criminal justice system. By 1973 the censorship regime in Australia was
broken. Nearly all the charges were dropped.” – [Quotation from the 107
Projects Event](http://107.org.au/event/tharunka-thor-journalism-politics-
art-1970-1973/).

Although the collection of issues of _Tharunka_ is largely accessible [via
Trove](http://trove.nla.gov.au/newspaper/page/24773115), the subsequent issues
of _Thorunka_ , and later _Thor_ , are not. For us, this demonstrates clearly
how collections themselves can encourage modes of reading. If you focus on
_Tharunka_ as a singular and long-standing periodical, this significant
political moment is rendered almost invisible. On the other hand, if the
issues are presented together, with commentary and surrounding publications,
the political environment becomes palpable. Wendy and Chris have kindly
allowed us to make their personal collection available via Dat Library (the
key is: 73fd26846e009e1f7b7c5b580e15eb0b2423f9bea33fe2a5f41fac0ddb22cbdc), so
you can discover this for yourself.

### Academia.edu alternative

Academia.edu, started in 2008, has raised tens of millions of dollars as a
social network for academics to share their publications. As a for-profit
venture, it is rife with metrics and it attempts to capitalise on the innate
competition and self-promotion of precarious knowledge workers in the academy.
It is simultaneously popular and despised: popular because it fills an obvious
desire to share the fruits of one’s intellectual work, but despised for the
neoliberal atmosphere that pervades every design decision and automated
correspondence. It is, however, just trying to provide a return on investment.

[Gary Hall has written](http://www.garyhall.info/journal/2015/10/18/does-
academiaedu-mean-open-access-is-becoming-irrelevant.html) that “its financial
rationale rests … on the ability of the angel-investor and venture-capital-
funded professional entrepreneurs who run Academia.edu to exploit the data
flows generated by the academics who use the platform as an intermediary for
sharing and discovering research”. Moreover, he emphasises that in the open-
access world (outside of the exploitative practice of for-profit publishers
like Elsevier, who charge a premium for subscriptions), the privileged
position is to be the one “ _who gate-keeps the data generated around the use
of that content_ ”. This lucrative position has been produced by recent
“[recentralising tendencies](http://commonstransition.org/the-revolution-will-
not-be-decentralised-blockchains/)” of the internet, which in Academia’s case
captures various, scattered open access repositories, personal web pages, and
other archives.

Is it possible to redecentralise? Can we break free of the subjectivities that
Academia.edu is crafting for us as we are interpellated by its infrastructure?
It is incredibly easy for any scholar running Dat Library to make a library of
their own publications and post the key to their faculty web page, Facebook
profile or business card. The tricky — and interesting — thing would be to
develop platforms that aggregate thousands of these libraries in direct
competition with Academia.edu. This way, individuals would maintain control
over their own work; their peer groups would assist in mirroring it; and no
one would be capitalising on the sale of data related to their performance and
popularity.

We note that Academia.edu is a typically centripetal platform: it provides no
tools for exporting one’s own content, so an alternative would necessarily be
a kind of centrifuge.

This alternative is becoming increasingly realistic. With open-access journals
already paving the way, there has more recently been a [call for free and open
access to citation data](https://www.insidehighered.com/news/2017/12/06
/scholars-push-free-access-online-citation-data-saying-they-need-and-deserve-
access). [The Initiative for Open Citations (I4OC)](https://i4oc.org) is
mobilising against the privatisation of data and working towards the
unrestricted availability of scholarly citation data. We see their new
database of citations as making this centrifugal force a possibility.

### Publication format

In writing this README, we have strung together several references. This
writing might be published in a book and the references will be listed as
words at the bottom of the page or at the end of the text. But the writing
might just as well be published as a HyperReadings object, providing the
reader with an archive of all the things we referred to and an editable
version of this text.

A new text editor could be created for this new publication format, not to
mention a new form of publication, which bundles together a set of
HyperReadings texts, producing a universe of texts and references. Each
HyperReadings text might reference others, of course, generating something
that begins to feel like a serverless World Wide Web.

It’s not even necessary to develop a new publication format, as any book might
be considered as a reading list (usually found in the footnotes and
bibliography) with a very detailed description of the relationship between the
consulted texts. What if the history of published works were considered in
this way, such that we might always be able to follow a reference from one
book directly into the pages of another, and so on?

### Syllabus

The syllabus is the manifesto of the twenty-first century. From [Your
Baltimore “Syllabus”](https://apis4blacklives.wordpress.com/2015/05/01/your-
baltimore-syllabus/), to
[#StandingRockSyllabus](https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/),
to [Women and gender non-conforming people writing about
tech](https://docs.google.com/document/d/1Qx8JDqfuXoHwk4_1PZYWrZu3mmCsV_05Fe09AtJ9ozw/edit),
syllabi are being produced as provocations, or as instructions for
reprogramming imaginaries. They do not announce a new world but they point out
a way to get there. As a programme, the syllabus shifts the burden of action
onto the readers, who will either execute the programme on their own fleshy
operating system — or not. A text that by its nature points to other texts,
the syllabus is already a relational document acknowledging its own position
within a living field of knowledge. It is decidedly not self-contained,
however, it often circulates as if it were.

If a syllabus circulated as a HyperReadings document, then it could point
directly to the texts and other media that it aggregates. But just as easily
as it circulates, a HyperReadings syllabus could be forked into new versions:
the syllabus is changed because there is a new essay out, or because of a
political disagreement, or because following the syllabus produced new
suggestions. These forks become a family tree where one can follow branches
and trace epistemological mutations.
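A sketch of what forking a HyperReadings-style syllabus could amount to; the bookkeeping shown here (copying the list and recording its parent) is our illustration, not the project’s actual mechanism.

```typescript
// Illustrative only: forking a reading list while recording where it came
// from, so that branches and their mutations remain traceable.

interface SyllabusVersion {
  id: string;
  parent?: string;        // id of the version this was forked from
  title: string;
  items: string[];        // pointers into libraries (keys, paths, citations)
  changelog: string[];
}

function fork(base: SyllabusVersion, reason: string): SyllabusVersion {
  return {
    id: `${base.id}-fork-${Date.now()}`,
    parent: base.id,
    title: base.title,
    items: [...base.items],               // start from the same readings
    changelog: [...base.changelog, reason],
  };
}

// e.g. a political disagreement produces a new branch:
// const branch = fork(someSyllabus, "replaced two texts after discussion");
```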

## Proposition (or Presuppositions)

While the software that we have started to write is a proposition in and of
itself, there is no guarantee as to _how_ it will be used. But when writing,
we _are_ imagining exactly that: we are making intuitive and hopeful
presuppositions about how it will be used, presuppositions that amount to a
set of social propositions.

### The role of individuals in the age of distribution

Different people have different technical resources and capabilities, but
everyone can contribute to an archive. By simply running the Dat Library
software and adding an archive to it, a person is sharing their disk space and
internet bandwidth in the service of that archive. At first, it is only the
archive’s index (a list of the contents) that is hosted, but if the person
downloads the contents (or even just a small portion of the contents) then
they are sharing in the hosting of the contents as well. Individuals, as
supporters of an archive or members of a community, can organise together to
guarantee the durability and accessibility of an archive, saving a future
UbuWeb from ever having to worry about its ‘ISP pulling the plug’. As
supporters of many archives, as members of many communities, individuals can
use Dat Library to perform this function many times over.

On the Web, individuals are usually users or browsers — they use browsers. In
spite of the ostensible interactivity of the medium, users are kept at a
distance from the actual code, the infrastructure of a website, which is run
on a server. With a distributed protocol like Dat, applications such as
[Beaker Browser](https://beakerbrowser.com) or Dat Library eliminate the
central server, not by destroying it, but by distributing it across all of the
users. Individuals are then not _just_ users, but also hosts. What kind of
subject is this user-host, especially as compared to the user of the server?
Michel Serres writes in _The Parasite_ :

> “It is raining; a passer-by comes in. Here is the interrupted meal once
more. Stopped for only a moment, since the traveller is asked to join the
diners. His host does not have to ask him twice. He accepts the invitation and
sits down in front of his bowl. The host is the satyr, dining at home; he is
the donor. He calls to the passer-by, saying to him, be our guest. The guest
is the stranger, the interrupter, the one who receives the soup, agrees to the
meal. The host, the guest: the same word; he gives and receives, offers and
accepts, invites and is invited, master and passer-by… An invariable term
through the transfer of the gift. It might be dangerous not to decide who is
the host and who is the guest, who gives and who receives, who is the parasite
and who is the table d’hote, who has the gift and who has the loss, and where
hospitality begins with hospitality.” — Michel Serres, The Parasite (Baltimore
and London: The Johns Hopkins University Press), 15–16.

Serres notes that _guest_ and _host_ are the same word in French; we might say
the same for _client_ and _server_ in a distributed protocol. And we will
embrace this multiplying hospitality, giving and taking without measure.

### The role of institutions in the age of distribution

David Cameron launched a doomed initiative in 2010 called the Big Society,
which paired large-scale cuts in public programmes with a call for local
communities to voluntarily self-organise to provide these essential services
for themselves. This is not the political future that we should be working
toward: since 2010, austerity policies have resulted in [120,000 excess deaths
in England](http://bmjopen.bmj.com/content/7/11/e017722). In other words,
while it might seem as though _institutions_ might be comparable to _servers_
, inasmuch as both are centralised infrastructures, we should not give them up
or allow them to be dismantled under the assumption that those infrastructures
can simply be distributed and self-organised. On the contrary, institutions
should be defended and organised in order to support the distributed protocols
we are discussing.

One simple way for a larger, more established institution to help ensure the
durability and accessibility of diverse archives is through the provision of
hardware, network capability and some basic technical support. It can back up
the archives of smaller institutions and groups within its own community while
also giving access to its own archives so that those collections might be put
to new uses. A network of smaller institutions, separated by great distances,
might mirror each other’s archives, both as an expression of solidarity and
positive redundancy and also as a means of circulating their archives,
histories and struggles amongst each of the others.

It was the simultaneous recognition that some documents are too important to
be privatised or lost to the threats of neglect, fire, mould, insects, etc.,
that prompted the development of national and state archives (See page 39 in
[Beredo, B. C., Import of the archive: American colonial bureaucracy in the
Philippines, 1898-1916](http://hdl.handle.net/10125/101724)). As public
institutions they were, and still are, tasked with often competing efforts to
house and preserve while simultaneously also ensuring access to public
documents. Fire and unstable weather understandably have given rise to large
fire-proof and climate-controlled buildings as centralised repositories,
accompanied by highly regulated protocols for access. But in light of new
technologies and their new risks, as discussed above, it is compelling to
argue now that, in order to fulfil their public duty, public archives should
be distributing their collections where possible and providing their resources
to smaller institutions and community groups.

Through the provision of disk space, office space, grants, technical support
and employment, larger institutions can materially support smaller
organisations, individuals and their archival afterlives. They can provide
physical space and outreach for dispersed collectors, gathering and piecing
together a fragmented archive.

But what happens as more people and collections are brought in? As more
institutional archives are allowed to circulate outside of institutional
walls? As storage is cut loose from its dependency on the corporate cloud and
into forms of interdependency, such as mutual support networks? Could this
open up spaces for new forms of not-quite-organisations and queer-
institutions? These would be almost-organisations that uncomfortable exist
somewhere between the common categorical markings of the individual and the
institution. In our thinking, its not important what these future forms
exactly look like. Rather, as discussed above, what is important to us is that
in writing software we open up spaces for the unknown, and allow others agency
to build the forms that work for them. It is only in such an atmosphere of
infinite hospitality that we see the future of community libraries, individual
collections and other precarious archives.

## A note on this text

This README was, and still is being, collaboratively written in a
[Git](https://en.wikipedia.org/wiki/Git)
[repository](https://en.wikipedia.org/wiki/Repository_\(version_control\)).
Git is a free and open-source tool for version control used in software
development. All the code for Hyperreadings, Dat Library and their numerous
associated modules are managed openly using Git and hosted on GitHub under
open source licenses. In a real way, Git’s specification formally binds our
collaboration as well as the open invitation for others to participate. As
such, the form of this README reflects its content. Like this text, these
projects are, by design, works in progress that are malleable to circumstances
and open to contributions, for example by opening a pull request on this
document or raising an issue on our GitHub repositories.

 
