Murtaugh
A bag but is language nothing of words
2016


## A bag but is language nothing of words

### From Mondotheque


(language is nothing but a bag of words)

[Michael Murtaugh](/wiki/index.php?title=Michael_Murtaugh "Michael Murtaugh")

In text indexing and other machine reading applications the term "bag of
words" is frequently used to underscore how processing algorithms often
represent text using a data structure (word histograms or weighted vectors)
where the original order of the words in sentence form is stripped away. While
"bag of words" might well serve as a cautionary reminder to programmers of the
essential violence perpetrated on a text, and a call to critically question the
efficacy of methods based on subsequent transformations, the expression's use
seems in practice more like a badge of pride or a schoolyard taunt that would
go: Hey language: you're nothin' but a big BAG-OF-WORDS.

## Bag of words

In information retrieval and other so-called _machine-reading_ applications
(such as text indexing for web search engines) the term "bag of words" is used
to underscore how in the course of processing a text the original order of the
words in sentence form is stripped away. The resulting representation is then
a collection of each unique word used in the text, typically weighted by the
number of times the word occurs.

Bags of words, also known as word histograms or weighted term vectors, are a
standard part of the data engineer's toolkit. But why such a drastic
transformation? The utility of the bag of words is in how it makes text
amenable to code: first, in that it's very straightforward to implement the
translation from a text document to a bag of words representation; more
significantly, this transformation then opens up a wide collection of tools
and techniques for further transformation and analysis. For instance, a number
of libraries available in the booming field of "data science" work with "high
dimension" vectors; bag of words is a way to transform a written document into
a mathematical vector where each "dimension" corresponds to the (relative)
quantity of each unique word. While physically unimaginable and abstract
(imagine each of Shakespeare's works as points in a 14 million dimensional
space), from a formal mathematical perspective it's quite a comfortable idea,
and many complementary techniques (such as principal component analysis) exist
to reduce the resulting complexity.
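The translation is indeed straightforward to implement; a minimal sketch in Python (the tokenization rule here is a simplifying assumption, and real indexers vary):

```python
from collections import Counter
import re

def bag_of_words(text):
    """Lowercase and tokenize, then count: the original order
    of the words in sentence form is stripped away."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

bag = bag_of_words("the cat sat on the mat, the dog sat too")
# Only the unique words and their weights remain;
# the sentence itself cannot be recovered from `bag`.
```

The resulting `Counter` is precisely the word histogram described above, and can be read as a sparse vector with one dimension per unique word.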

What's striking about a bag of words representation, given its centrality in
so many text retrieval applications, is its irreversibility. Given a bag of
words representation of a text, producing the original text would require in
essence the "brain" of a writer to recompose sentences, working with the
patience of a devoted cryptogram puzzler to draw from the precise stock of
available words. While "bag of words" might well serve as a cautionary
reminder to programmers of the essential violence perpetrated on a text, and a
call to critically question the efficacy of methods based on subsequent
transformations, the expression's use seems in practice more like a badge of
pride or a schoolyard taunt that would go: Hey language: you're nothing but a
big BAG-OF-WORDS. Following this spirit of the term, "bag of words" celebrates
a perfunctory step of "breaking" a text into a purer form amenable to
computation, stripping language of its silly redundant repetitions and
foolishly contrived stylistic phrasings to reveal a purer inner essence.

## Book of words

Lieber's Standard Telegraphic Code, first published in 1896 and republished in
various updated editions through the early 1900s, is an example of one of
several competing systems of telegraph code books. The idea was for both
senders and receivers of telegraph messages to use the books to translate
their messages into a sequence of code words which could then be sent for less
money, as telegraph messages were paid for by the word. In the front of the book, a
list of examples gives a sampling of how messages like: "Have bought for your
account 400 bales of cotton, March delivery, at 8.34" can be conveyed by a
telegram with the message "Ciotola, Delaboravi". In each case the reduction of
number of transmitted words is highlighted to underscore the efficacy of the
method. Like a dictionary or thesaurus, the book is primarily organized around
key words, such as _act_ , _advice_ , _affairs_ , _bags_ , _bail_ , and
_bales_ , under which exhaustive lists of useful phrases involving the
corresponding word are provided in the main pages of the volume. [1]
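The mechanism is that of a simple lookup table applied in both directions; a minimal sketch (the two code words come from Lieber's own example above, but the phrase boundaries and table layout here are hypothetical):

```python
# A radically abbreviated "code book": phrase -> code word.
# "Ciotola" and "Delaboravi" appear in Lieber's own examples;
# how the message is split into phrases is an assumption.
CODEBOOK = {
    "Have bought for your account 400 bales of cotton": "Ciotola",
    "March delivery, at 8.34": "Delaboravi",
}
DECODE = {code: phrase for phrase, code in CODEBOOK.items()}

def encode(phrases):
    """Sender's side: replace each phrase with its code word."""
    return " ".join(CODEBOOK[p] for p in phrases)

def decode(telegram):
    """Receiver's side: look each code word back up."""
    return ", ".join(DECODE[w] for w in telegram.split())

telegram = encode(["Have bought for your account 400 bales of cotton",
                   "March delivery, at 8.34"])
# Two billable words now stand in for a fourteen-word message.
```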

[![Liebers
P1016847.JPG](/wiki/images/4/41/Liebers_P1016847.JPG)](/wiki/index.php?title=File:Liebers_P1016847.JPG)

[![Liebers
P1016859.JPG](/wiki/images/3/35/Liebers_P1016859.JPG)](/wiki/index.php?title=File:Liebers_P1016859.JPG)

[![Liebers
P1016861.JPG](/wiki/images/3/34/Liebers_P1016861.JPG)](/wiki/index.php?title=File:Liebers_P1016861.JPG)

[![Liebers
P1016869.JPG](/wiki/images/f/fd/Liebers_P1016869.JPG)](/wiki/index.php?title=File:Liebers_P1016869.JPG)

> [...] my focus in this chapter is on the inscription technology that grew
parasitically alongside the monopolistic pricing strategies of telegraph
companies: telegraph code books. Constructed under the bywords “economy,”
“secrecy,” and “simplicity,” telegraph code books matched phrases and words
with code letters or numbers. The idea was to use a single code word instead
of an entire phrase, thus saving money by serving as an information
compression technology. Generally economy won out over secrecy, but in
specialized cases, secrecy was also important.[2]

In Katherine Hayles' chapter devoted to telegraph code books she observes how:

> The interaction between code and language shows a steady movement away from
a human-centric view of code toward a machine-centric view, thus anticipating
the development of full-fledged machine codes with the digital computer. [3]

[![Liebers
P1016851.JPG](/wiki/images/1/13/Liebers_P1016851.JPG)](/wiki/index.php?title=File:Liebers_P1016851.JPG)
Aspects of this transitional moment are apparent in a notice prominently
inserted in Lieber's code book:

> After July, 1904, all combinations of letters that do not exceed ten will
pass as one cipher word, provided that it is pronounceable, or that it is
taken from the following languages: English, French, German, Dutch, Spanish,
Portuguese or Latin -- International Telegraphic Conference, July 1903 [4]

Conforming to international conventions regulating telegraph communication at
that time, the stipulation that code words be actual words drawn from a
variety of European languages (many of Lieber's code words are indeed
arbitrary Dutch, German, and Spanish words) underscores this particular moment
of transition as reference to the human body in the form of "pronounceable"
speech from representative languages begins to yield to the inherent potential
for arbitrariness in digital representation.

What telegraph code books remind us of is the relation of language in general
to economy. Whether these are economies of memory, attention, costs paid to a
telecommunications company, or of computer processing time or storage space,
encoding language or knowledge in any form of writing is a form of shorthand
and always involves an interplay with what one expects to perform or "get
out" of the resulting encoding.

> Along with the invention of telegraphic codes comes a paradox that John
Guillory has noted: code can be used both to clarify and occlude. Among the
sedimented structures in the technological unconscious is the dream of a
universal language. Uniting the world in networks of communication that
flashed faster than ever before, telegraphy was particularly suited to the
idea that intercultural communication could become almost effortless. In this
utopian vision, the effects of continuous reciprocal causality expand to
global proportions capable of radically transforming the conditions of human
life. That these dreams were never realized seems, in retrospect, inevitable.
[5]

[![Liebers
P1016884.JPG](/wiki/images/9/9c/Liebers_P1016884.JPG)](/wiki/index.php?title=File:Liebers_P1016884.JPG)

[![Liebers
P1016852.JPG](/wiki/images/7/74/Liebers_P1016852.JPG)](/wiki/index.php?title=File:Liebers_P1016852.JPG)

[![Liebers
P1016880.JPG](/wiki/images/1/11/Liebers_P1016880.JPG)](/wiki/index.php?title=File:Liebers_P1016880.JPG)

Far from providing a universal system of encoding messages in the English
language, Lieber's code is quite clearly designed for the particular needs and
conditions of its use. In addition to the phrases ordered by keywords, the
book includes a number of tables of terms for specialized use. One table lists
a set of words used to describe all possible permutations of numeric grades of
coffee (Choliam = 3,4, Choliambos = 3,4,5, Choliba = 4,5, etc.); another table
lists pairs of code words to express the respective daily rise or fall of the
price of coffee at the port of Le Havre in increments of a quarter of a Franc
per 50 kilos ("Chirriado = prices have advanced 1 1/4 francs"). From an
archaeological perspective, Lieber's code book reveals a cross section of the
needs and desires of early 20th century business communication between the
United States and its trading partners.

The advertisements lining Lieber's code book further situate its use and that
of commercial telegraphy. Among the many advertisements for banking and law
services, office equipment, and alcohol are several ads for gun powder and
explosives, drilling equipment, and metallurgic services, all with specific
applications to mining. Building on telegraphy's formative role in ship-to-
shore and ship-to-ship communication for reasons of safety, commercial
telegraphy extended this network of communication to include those parties
coordinating the "raw materials" being mined, grown, or otherwise extracted
from overseas sources and shipped back for sale.

## "Raw data now!"

From [La ville intelligente - Ville de la connaissance](/wiki/index.php?title
=La_ville_intelligente_-_Ville_de_la_connaissance "La ville intelligente -
Ville de la connaissance"):

Étant donné que les nouvelles formes modernistes et l'utilisation de matériaux
propageaient l'abondance d'éléments décoratifs, Paul Otlet croyait en la
possibilité du langage comme modèle de « [données
brutes](/wiki/index.php?title=Bag_of_words "Bag of words") », le réduisant aux
informations essentielles et aux faits sans ambiguïté, tout en se débarrassant
de tous les éléments inefficaces et subjectifs.


From [The Smart City - City of Knowledge](/wiki/index.php?title
=The_Smart_City_-_City_of_Knowledge "The Smart City - City of Knowledge"):

As new modernist forms and use of materials propagated the abundance of
decorative elements, Otlet believed in the possibility of language as a model
of '[raw data](/wiki/index.php?title=Bag_of_words "Bag of words")', reducing
it to essential information and unambiguous facts, while removing all
inefficient assets of ambiguity or subjectivity.


> Tim Berners-Lee: [...] Make a beautiful website, but first give us the
unadulterated data, we want the data. We want unadulterated data. OK, we have
to ask for raw data now. And I'm going to ask you to practice that, OK? Can
you say "raw"?

>

> Audience: Raw.

>

> Tim Berners-Lee: Can you say "data"?

>

> Audience: Data.

>

> TBL: Can you say "now"?

>

> Audience: Now!

>

> TBL: Alright, "raw data now"!

>

> [...]

>

> So, we're at the stage now where we have to do this -- the people who think
it's a great idea. And all the people -- and I think there's a lot of people
at TED who do things because -- even though there's not an immediate return on
the investment because it will only really pay off when everybody else has
done it -- they'll do it because they're the sort of person who just does
things which would be good if everybody else did them. OK, so it's called
linked data. I want you to make it. I want you to demand it. [6]

## Un/Structured

As graduate students at Stanford, Sergey Brin and Lawrence (Larry) Page had an
early interest in producing "structured data" from the "unstructured" web. [7]

> The World Wide Web provides a vast source of information of almost all
types, ranging from DNA databases to resumes to lists of favorite restaurants.
However, this information is often scattered among many web servers and hosts,
using many different formats. If these chunks of information could be
extracted from the World Wide Web and integrated into a structured form, they
would form an unprecedented source of information. It would include the
largest international directory of people, the largest and most diverse
databases of products, the greatest bibliography of academic works, and many
other useful resources. [...]

>

> **2.1 The Problem**
> Here we define our problem more formally:
> Let D be a large database of unstructured information such as the World
Wide Web [...] [8]

In a paper titled _Dynamic Data Mining_, Brin and Page situate their research
as looking for _rules_ (statistical correlations) between words used in web
pages. The "baskets" they mention stem from the origins of "market basket"
techniques developed to find correlations between the items recorded in the
purchase receipts of supermarket customers. In their case, they deal with web
pages rather than shopping baskets, and words instead of purchases. In
transitioning to the much larger scale of the web, they describe the
usefulness of their research in terms of its computational economy, that is,
the ability to tackle the scale of the web with contemporary computing power
and still complete the task in a reasonably short amount of time.

> A traditional algorithm could not compute the large itemsets in the lifetime
of the universe. [...] Yet many data sets are difficult to mine because they
have many frequently occurring items, complex relationships between the items,
and a large number of items per basket. In this paper we experiment with word
usage in documents on the World Wide Web (see Section 4.2 for details about
this data set). This data set is fundamentally different from a supermarket
data set. Each document has roughly 150 distinct words on average, as compared
to roughly 10 items for cash register transactions. We restrict ourselves to a
subset of about 24 million documents from the web. This set of documents
contains over 14 million distinct words, with tens of thousands of them
occurring above a reasonable support threshold. Very many sets of these words
are highly correlated and occur often. [9]
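The counting step behind such "market basket" rules can be sketched as follows (a toy illustration of frequent pair counting, not Brin and Page's actual sampling algorithm, and with made-up documents):

```python
from collections import Counter
from itertools import combinations

# Each "basket" is the set of distinct words in one document.
documents = [
    "data mining finds rules in baskets",
    "web pages are baskets of words",
    "mining web data for rules",
]
baskets = [set(doc.split()) for doc in documents]

# Count how often each pair of words occurs together in a basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Pairs at or above a support threshold become candidate "rules".
frequent = [pair for pair, n in pair_counts.items() if n >= 2]
```

The combinatorial explosion Brin and Page describe is visible even here: with roughly 150 distinct words per web document, each document contributes over 11,000 pairs, which is exactly why a "traditional algorithm" fails at web scale.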

## Un/Ordered

In programming, I've encountered a recurring "problem" that's quite
symptomatic. It goes something like this: you (the programmer) have managed to
cobble together a lovely "content management system" (either from scratch, or
using any number of helpful frameworks) where your user can enter some "items"
into a database, for instance to store bookmarks. These items are then
automatically presented in ordered list form (say on a web page). The author:
It's great, except... could this bookmark come before that one? The problem
stems from the fact that the database ordering (a core functionality provided
by any database) somehow applies a sorting logic that's almost but not quite
right. A typical example is the sorting of names, where details (where to
place a name that starts with a Norwegian "Ø", for instance) are
language-specific, and when a mixture of languages occurs, no single ordering
is necessarily "correct". The (often) exasperated programmer might hastily add
an additional database field so that each item can also have an "order"
(perhaps in the form of a date or some other kind of (alpha)numerical
"sorting" value) to be used to correctly order the resulting list. Now the
author has a means, awkward and indirect but workable, to control the order of
the presented data on the start page.

But one might well ask, why not just edit the resulting listing as a document?
Not possible! Contemporary content management systems are based on a data flow
from a "pure" source of a database, through controlling code and templates, to
produce a document as a result. The document isn't the data; it's the end
result of an irreversible process. This problem, in this and many variants, is
widespread, and reveals an essential backwardness in a particular "computer
scientist" mindset about what constitutes "data", and in particular its
relationship to order, that turns what might be a straightforward question of
editing a document into an over-engineered database.
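Both halves of the anecdote, the not-quite-right default sort and the bolted-on "order" field, can be sketched (the names and the field layout are hypothetical):

```python
# Naive code-point sorting misplaces the Norwegian "Ø" (U+00D8),
# which sorts after "Z"; a Norwegian reader expects it near the end
# of the alphabet, but other locales would disagree.
names = ["Østergaard", "Andersen", "Zimmermann"]
naive = sorted(names)  # "Østergaard" lands after "Zimmermann"

# The workaround: bolt an explicit "order" field onto each item
# and sort by that instead, letting the author dictate the order.
bookmarks = [
    {"title": "Østergaard", "order": 1},
    {"title": "Andersen", "order": 2},
    {"title": "Zimmermann", "order": 3},
]
listing = [b["title"] for b in sorted(bookmarks, key=lambda b: b["order"])]
```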

I recently worked with Nikolaos Vogiatzis, whose research explores playful and
radically subjective alternatives to the list. Vogiatzis was struck by how the
earliest specifications of HTML (still valid today) have separate elements (OL
and UL) for "ordered" and "unordered" lists.

> The representation of the list is not defined here, but a bulleted list for
unordered lists, and a sequence of numbered paragraphs for an ordered list
would be quite appropriate. Other possibilities for interactive display
include embedded scrollable browse panels. [10]

Vogiatzis' surprise lay in the idea of a list ever being considered
"unordered" (or, in the language used in the specification, for order to ever
be considered "insignificant"). Indeed, in the suggested representation, still
followed by modern web browsers, the only visual difference between the two is
that UL items are preceded by a bullet symbol, while OL items are numbered.

The idea of ordering runs deep in programming practice, where essentially
different data structures are employed depending on whether order is to be
maintained. The indexes of a "hash" table, for instance (also known as an
associative array), are ordered in an unpredictable way governed by a
particular implementation. This data structure, extremely prevalent in
contemporary programming practice, sacrifices order to offer other kinds of
efficiency (fast key-based retrieval, for instance).
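The trade-off can be illustrated in Python, whose built-in dict is exactly such a hash table (example names are arbitrary; note that modern Python versions happen to guarantee dict insertion order, which only underlines that ordering is a property of the implementation rather than of the programmer's intent):

```python
# A list maintains the order items were given, but membership
# tests scan it linearly:
titles = ["Aurora", "Borealis", "Corona"]
assert titles[0] == "Aurora"   # order is significant
assert "Corona" in titles      # O(n) scan

# A hash table (Python dict) trades any classically guaranteed
# ordering for fast, key-based retrieval:
index = {"Aurora": 1, "Borealis": 2, "Corona": 3}
assert index["Corona"] == 3    # near constant-time lookup

# A set is hash-based too: its iteration order is governed by the
# hash implementation, not by the programmer.
members = {"Aurora", "Borealis", "Corona"}
assert "Borealis" in members
```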

## Data mining

In announcing Google's impending data center in Mons, Belgian prime minister
Di Rupo invoked the link between the history of the mining industry in the
region and the present and future interest in "data mining" as practiced by IT
companies such as Google.

Whether speaking of bales of cotton, barrels of oil, or bags of words, what
links these subjects is the way in which the notion of "raw material" obscures
the labor and power structures employed to secure them. "Raw" is always
relative: "purity" depends on processes of "refinement" that typically carry
social/ecological impact.

Stripping language of order is an act of "disembodiment", detaching it from
the acts of writing and reading. The shift from (human) reading to machine
reading involves a shift of responsibility from the individual human body to
the obscured responsibilities and seemingly inevitable forces of the
"machine", be it the machine of a market or the machine of an algorithm.

From [X = Y](/wiki/index.php?title=X_%3D_Y "X = Y"):

Still, it is reassuring to know that the products hold traces of the work,
that even with the progressive removal of human signs in automated processes,
the workers' presence never disappears completely. This presence is proof of
the materiality of information production, and becomes a sign of the economies
and paradigms of efficiency and profitability that are involved.


The computer scientists' view of textual content as "unstructured", be it in a
webpage or the OCR scanned pages of a book, reflects a negligence toward the
processes and labor of writing, editing, design, layout, typesetting, and
eventually publishing, collecting and cataloging [11].

"Unstructured", to the computer scientist, means non-conformant to particular
forms of machine reading. "Structuring" then is a social process by which
particular (additional) conventions are agreed upon and employed. Computer
scientists often view text through the eyes of their particular reading
algorithm, and in the process (voluntarily) blind themselves to the work
practices which have produced and maintain these "resources".

Berners-Lee, in chastising his audience of web publishers not only to publish
online but to release "unadulterated" data, betrays a lack of imagination in
considering how language is itself structured, and a blindness to the need for
more than additional technical standards to connect to existing publishing
practices.

Last Revision: 2.08.2016

1. ↑ Benjamin Franklin Lieber, Lieber's Standard Telegraphic Code, 1896, New York;
2. ↑ Katherine Hayles, "Technogenesis in Action: Telegraph Code Books and the Place of the Human", How We Think: Digital Media and Contemporary Technogenesis, 2012
3. ↑ Hayles
4. ↑ Lieber's
5. ↑ Hayles
6. ↑ Tim Berners-Lee: The next web, TED Talk, February 2009
7. ↑ "Research on the Web seems to be fashionable these days and I guess I'm no exception." from Brin's [Stanford webpage](http://infolab.stanford.edu/~sergey/)
8. ↑ Extracting Patterns and Relations from the World Wide Web, Sergey Brin, Proceedings of the WebDB Workshop at EDBT 1998,
9. ↑ Dynamic Data Mining: Exploring Large Rule Spaces by Sampling; Sergey Brin and Lawrence Page, 1998; p. 2
10. ↑ Hypertext Markup Language (HTML): "Internet Draft", Tim Berners-Lee and Daniel Connolly, June 1993,
11. ↑

Retrieved from
[https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480](https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480)

Dekker & Barok
Copying as a Way to Start Something New A Conversation with Dusan Barok about Monoskop
2017


COPYING AS A WAY TO START SOMETHING NEW
A Conversation with Dusan Barok about Monoskop

Annet Dekker

Dusan Barok is an artist, writer, and cultural activist involved
in critical practice in the fields of software, art, and theory. After founding and organizing the online culture portal
Koridor in Slovakia from 1999–2002, in 2003 he co-founded
the BURUNDI media lab where he organized the Translab
evening series. A year later, the first ideas about building an
online platform for texts and media started to emerge and
Monoskop became a reality. More than a decade later, Barok
is well-known as the main editor of Monoskop. In 2016, he
began a PhD research project at the University of Amsterdam. His project, titled Database for the Documentation of
Contemporary Art, investigates art databases as discursive
platforms that provide context for artworks. In an extended
email exchange, we discuss the possibilities and restraints
of an online ‘archive’.
ANNET DEKKER

You started Monoskop in 2004, already some time ago. What
does the name mean?
DUSAN BAROK

‘Monoskop’ is the Slovak equivalent of the English ‘monoscope’, which means an electric tube used in analogue TV
broadcasting to produce images of test cards, station logotypes, error messages but also for calibrating cameras. Monoscopes were automatized television announcers designed to
speak to both live and machine audiences about the status
of a channel, broadcasting purely phatic messages.
AD
Can you explain why you wanted to do the project and how it
developed to what it is now? In other words, what were your
main aims and have they changed? If so, in which direction
and what caused these changes?
DB

I began Monoskop as one of the strands of the BURUNDI
media lab in Bratislava. Originally, it was designed as a wiki
website for documenting media art and culture in the eastern part of Europe, whose backbone consisted of city entries
composed of links to separate pages about various events,


LOST AND LIVING (IN) ARCHIVES

initiatives, and individuals. In the early days it was modelled
on Wikipedia (which had been running for two years when
Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over
the years, the geographic and thematic boundaries have
gradually expanded to embrace the arts and humanities in
their widest sense, focusing primarily on lesser-known
1
phenomena.1 Perhaps the biggest change is the ongoing
See for example
shift from mapping people, events, and places towards
https://monoskop.org/
Features. Accessed
synthesizing discourses.
28 May 2016.
A turning point occurred during my studies at the
Piet Zwart Institute, in the Networked Media programme
from 2010–2012, which combined art, design, software,
and theory with support in the philosophy of open source
and prototyping. While there, I was researching aspects of
the networked condition and how it transforms knowledge,
sociality and economics: I wrote research papers on leaking
as a technique of knowledge production, a critique of the
social graph, and on the libertarian values embedded in the
design of digital currencies. I was ready for more practice.
When Aymeric Mansoux, one of the tutors, encouraged me
to develop my then side-project Monoskop into a graduation
work, the timing was good.
The website got its own domain, a redesign, and most
crucially, the Monoskop wiki was restructured from its
focus on media art and culture towards the much wider
embrace of the arts and humanities. It turned into a media
library of sorts. The graduation work also consisted of
a symposium about personal collecting and media archiving,2 which saw its loose follow-ups on media aesthetics (in Bergen)3 and on knowledge classification and
archives (in Mons)4 last year.

2 https://monoskop.org/Symposium. Accessed 28 May 2016.
3 https://monoskop.org/The_Extensions_of_Many. Accessed 28 May 2016.
4 https://monoskop.org/Ideographies_of_Knowledge. Accessed 28 May 2016.

AD

Did you have a background in library studies, or have
you taken their ideas/methods of systemization and categorization (metadata)? If not, what are your methods
and how did you develop them?

been an interesting process, clearly showing the influence
of a changing back-end system. Are you interested in the
idea of sharing and circulating texts as a new way not just
of accessing and distributing but perhaps also of production—and publishing? I’m thinking how Aaaaarg started as
a way to share and exchange ideas about a text. In what
way do you think Monoskop plays (or could play) with these
kinds of mechanisms? Do you think it brings out a new
potential in publishing?

DB

Besides the standard literature in information science (I
have a degree in information technologies), I read some
works of documentation scientists Paul Otlet and Suzanne
Briet, historians such as W. Boyd Rayward and Ronald E.
Day, as well as translated writings of Michel Pêcheux and
other French discourse analysts of the 1960s and 1970s.
This interest was triggered in late 2014 by the confluence
of Femke’s Mondotheque project and an invitation to be an
artist-in-residence in Mons in Belgium at the Mundaneum,
home to Paul Otlet’s recently restored archive.
This led me to identify three tropes of organizing and
navigating written records, which has guided my thinking
about libraries and research ever since: class, reference,
and index. Classification entails tree-like structuring, such
as faceting the meanings of words and expressions, and
developing classification systems for libraries. Referencing
stands for citations, hyperlinking and bibliographies. Indexing ranges from the listing of occurrences of selected terms
to an ‘absolute’ index of all terms, enabling full-text search.
With this in mind, I have done a number of experiments.
There is an index of selected persons and terms from
5
across the Monoskop wiki and Log.5 There is a growing
https://monoskop.org/
list of wiki entries with bibliographies and institutional
Index. Accessed
28 May 2016.
infrastructures of fields and theories in the humanities.6
There is a lexicon aggregating entries from some ten
6
dictionaries of the humanities into a single page with
https://monoskop.org/
hyperlinks to each full entry (unpublished). There is an
Humanities. Accessed
28 May 2016.
alternative interface to the Monoskop Log, in which entries are navigated solely through a tag cloud acting as
a multidimensional filter (unpublished). There is a reader
containing some fifty books whose mutual references are
turned into hyperlinks, and whose main interface consists
of terms specific to each text, generated through tf-idf algorithm (unpublished). And so on.

DB

The publishing market frames the publication as a singular
body of work, autonomous from other titles on offer, and
subjects it to the rules of the market—with a price tag and
copyright notice attached. But for scholars and artists, these
are rarely an issue. Most academic work is subsidized from
public sources in the first place, and many would prefer to
give their work away for free since openness attracts more
citations. Why they opt to submit to the market is for quality
editing and an increase of their own symbolic value in direct
proportion to the ranking of their publishing house. This
is not dissimilar from the music industry. And indeed, for
many the goal is to compose chants that would gain popularity across academia and get their place in the popular
imagination.
On the other hand, besides providing access, digital
libraries are also fit to provide context by treating publications as a corpus of texts that can be accessed through an
unlimited number of interfaces designed with an understanding of the functionality of databases and an openness
to the imagination of the community of users. This can
be done by creating layers of classification, interlinking
bodies of texts through references, creating alternative
indexes of persons, things and terms, making full-text
search possible, making visual search possible—across
the whole of corpus as well as its parts, and so on. Isn’t
this what makes a difference? To be sure, websites such
as Aaaaarg and Monoskop have explored only the tip of

LOST AND LIVING (IN) ARCHIVES

COPYING AS A WAY TO START SOMETHING NEW

AD

Indeed, looking at the archive in many alternative ways has only touched the tip of the iceberg of possibilities. There is much more to tinker and hack around.
It is interesting that whilst the accessibility and search potential has radically changed, the content, a book or any other text, is still a particular kind of thing with its own characteristics and forms. Whereas the process of writing texts seems hard to change, would you be interested in creating more alliances between texts to bring out new bibliographies? In this sense, starting to produce new texts, by including other texts and documents, like emails, visuals, audio, CD-ROMs, or even unpublished texts or manuscripts?

DB

Currently Monoskop is compiling more and more ‘source’ bibliographies, containing digital versions of the actual texts they refer to. This has been very much in focus in the past two or three years and Monoskop is now home to hundreds of bibliographies of twentieth-century artists, writers, groups, and movements as well as of various theories and humanities disciplines.[7] As the next step I would like to move on to enabling full-text search within each such bibliography. This will make more apparent that the ‘source’ bibliography is a form of anthology, a corpus of texts representing a discourse. Another issue is to activate cross-references within texts: to turn page numbers in bibliographic citations inside texts into hyperlinks leading to other texts.

[7] See for example https://monoskop.org/Foucault, https://monoskop.org/Lissitzky, https://monoskop.org/Humanities. All accessed 28 May 2016.

This is to experiment further with the specificity of digital text, which is different both from oral speech and from printed books. These can be described as three distinct yet mutually encapsulated domains. Orality emphasizes the sequence and narrative of an argument, in which words themselves are imagined as constituting meaning. Specific to writing, on the other hand, is referring to the written record; texts are brought together by way of references, which in turn create context, also called discourse. Statements are ‘fixed’ to paper and meaning is constituted by their contexts, both within a given text and within a discourse in which it is embedded. What is specific to digital text, however, is that we can search it in milliseconds. Full-text search is enabled by the index: search engines operate thanks to bots that assign each expression a unique address and store it in a database. In this respect, the index usually found at the end of a printed book is something that has been automated with the arrival of machine search.
In other words, even though knowledge in the age of the internet is still being shaped by the departmentalization of academia and its related procedures and rituals of discourse production, and its modes of expression are centred around verbal rhetoric, the flattening effects of the index really transformed the ways in which we come to ‘know’ things. To ‘write’ a ‘book’ in this context is to produce a searchable database instead.
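The kind of index described above can be sketched in a few lines. The sketch is purely illustrative (the document names and texts below are invented): each word is mapped to the set of documents it occurs in, so a query is answered by intersecting those sets rather than by rescanning the texts.

```python
# Minimal inverted index: word -> set of documents containing it.
from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            # Strip common punctuation so 'index.' and 'index' match.
            index[word.strip('.,;:!?')].add(name)
    return index

def search(index, query):
    # A document matches only if it contains every word of the query.
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {
    "otlet": "the index card is a unit of knowledge",
    "monoskop": "a searchable database is an index of texts",
}
index = build_index(docs)

print(sorted(search(index, "index")))                 # ['monoskop', 'otlet']
print(sorted(search(index, "searchable database")))   # ['monoskop']
```

This is, in miniature, what turns a ‘book’ into a searchable database: the original order of the words plays no role in the lookup.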

216

LOST AND LIVING (IN) ARCHIVES

AD

So, perhaps we finally have come to ‘the death of the author’, at least in so far as automated mechanisms are becoming active agents in the (re)creation process. To return to Monoskop in its current form, what choices do you make regarding the content of the repositories? Are there things you don’t want to collect, or wish you could but have not been able to?
DB

In a sense, I turned to a wiki and started Monoskop as
a way to keep track of my reading and browsing. It is a
by-product of a succession of my interests, obsessions, and
digressions. That it is publicly accessible is a consequence
of the fact that paper notebooks, text files kept offline and
private wikis proved to be inadequate at the moment when I
needed to quickly find notes from reading some text earlier.
It is not perfect, but it solved the issue of immediate access
and retrieval. Plus there is a bonus of having the body of
my past ten or twelve years of reading mutually interlinked
and searchable. An interesting outcome is that these ‘notes’
are public—one is motivated to formulate and frame them so as to be readable and useful for others as well. A similar
difference is between writing an entry in a personal diary
and writing a blog post. That is also why the autonomy
of technical infrastructure is so important here. Posting
research notes on Facebook may increase one’s visibility
among peers, but the ‘terms of service’ say explicitly that
anything can be deleted by administrators at any time,
without any reason. I ‘collect’ things that I wish to be able
to return to, to remember, or to recollect easily.
AD

Can you describe the process: how do you get the books, already digitized, or do you do a lot yourself? In other words, could you describe the (technical) process and the organizational aspects of the project?
DB

In the beginning, I spent a lot of time exploring other digital
libraries which served as sources for most of the entries on
Log (Gigapedia, Libgen, Aaaaarg, Bibliotik, Scribd, Issuu,
Karagarga, Google filetype:pdf). Later I started corresponding with a number of people from around the world (NYC,
Rotterdam, Buenos Aires, Boulder, Berlin, Ploiesti, etc.) who
contribute scans and links to scans on an irregular basis.
Out-of-print and open-access titles often come directly from
authors and publishers. Many artists’ books and magazines
were scraped or downloaded through URL manipulation
from online collections of museums, archives and libraries.
Needless to say, my offline archive is much bigger than
what is on Monoskop. I tend to put online the files I prefer
not to lose. The web is the best backup solution I have
found so far.
The Monoskop wiki is open for everyone to edit; any user
can upload their own works or scans and many do. Many of
those who spent more time working on the website ended up
being my friends. And many of my friends ended up having
an account as well :). For everyone else, there is no record
kept about what one downloaded, what one read and for
how long... we don’t care, we don’t track.


AD

In what way has the larger (free) publishing context changed your project? There are currently several free text-sharing initiatives around (some, like Textz.com or Aaaaarg, already active before you started). How do you collaborate with, or distinguish yourselves from, each other?
DB

It would not be an overstatement to say that while in the previous decade Monoskop was shaped primarily by the ‘media culture’ milieu which it intended to document, the branching out of its repository of highlighted publications, Monoskop Log, in 2009, and the broadening of its focus to also include the whole of the twentieth and twenty-first centuries, situate it more firmly in the context of online archives, and especially digital libraries.
I only got to know others in this milieu later. I approached
Sean Dockray in 2010, Marcell Mars approached me the
following year, and then in 2013 he introduced me to Kenneth Goldsmith. We are in steady contact, especially through
public events hosted by various cultural centres and galleries.
The first large one was held at Ljubljana’s hackerspace Kiberpipa in 2012. Later came the conferences and workshops
organized by Kuda at a youth centre in Novi Sad (2013), by
the Institute of Network Cultures at WORM, Rotterdam (2014),
WKV and Akademie Schloss Solitude in Stuttgart (2014),
Mama & Nova Gallery in Zagreb (2015), ECC at Mundaneum,
Mons (2015), and most recently by the Media Department
of the University of Malmö (2016).[8]

[8] For more information see https://monoskop.org/Digital_libraries#Workshops_and_conferences. Accessed 28 May 2016.

The leitmotif of all these events was the digital library, and their atmosphere can be described as the spirit of early hacker culture that eventually left the walls of a computer lab. Only rarely have there been professional librarians, archivists, and publishers among the speakers, even though the voices represented were quite diverse.
To name just the more frequent participants... Marcell
and Tom Medak (Memory of the World) advocate universal
access to knowledge informed by the positions of the Yugoslav


Marxist school Praxis; Sean’s work is critical of the militarization and commercialization of the university (in the
context of which Aaaaarg will always come as secondary, as
an extension of The Public School in Los Angeles); Kenneth
aims to revive the literary avant-garde while standing on the
shoulders of his heroes documented on UbuWeb; Sebastian
Lütgert and Jan Gerber are the most serious software developers among us, while their projects such as Textz.com and
Pad.ma should be read against critical theory and Situationist cinema; Femke Snelting has initiated the collaborative
research-publication Mondotheque about the legacy of the
early twentieth century Brussels-born information scientist
Paul Otlet, triggered by the attempt of Google to rebrand him
as the father of the internet.
I have been trying to identify implications of the digital-networked textuality for knowledge production, including humanities research, while speaking from the position
of a cultural worker who spent his formative years in the
former Eastern Bloc, experiencing freedom as that of unprecedented access to information via the internet following
the fall of the Berlin Wall. In this respect, Monoskop is a way to bring into ‘archival consciousness’ what the East had missed out on during the Cold War. And also, more generally, what the non-West had missed out on in the polarized world, and vice versa, what was invisible in the formal Western cultural canons.
There have been several attempts to develop new projects,
and the collaborative efforts have materialized in shared
infrastructure and introductions of new features in respective platforms, such as PDF reader and full-text search on
Aaaaarg. Marcell and Tom along with their collaborators have
been steadily developing the Memory of the World library and
Sebastian resuscitated Textz.com. Besides that, there are
overlaps in titles hosted in each library, and Monoskop bibliographies extensively link to scans on Libgen and Aaaaarg,
while artists’ profiles on the website link to audio and video
recordings on UbuWeb.


AD

It is interesting to hear that there weren’t any archivists or professional librarians involved (yet). What is your position
towards these professional and institutional entities and
persons?
DB

As the recent example of Sci-Hub showed, in the age of digital networks, for many researchers libraries are primarily free proxies to corporate repositories of academic journals.[9] Their other emerging role is that of a digital repository of works in the public domain (the role pioneered in the United States by Project Gutenberg and the Internet Archive). There have been too many attempts to transpose librarians’ techniques from the paperbound world into the digital domain. Yet, as I said before, there is much more to explore. Perhaps the most exciting inventive approaches can be found in the field of classics, for example in the Perseus Digital Library & Catalog and the Homer Multitext Project. Perseus combines digital editions of ancient literary works with multiple lexical tools in a way that even a non-professional can check and verify a disputable translation of a quote. Something that is hard to imagine being possible in print.

[9] For more information see www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone. Accessed 28 May 2016.
AD

I think it is interesting to see how Monoskop and other repositories like it have gained different constituencies globally; for one, you can see a shift in the texts being put up. From the start you tried to bring in a strong ‘eastern European voice’, nevertheless at the moment the content of the repository reflects a very western perspective on critical theory. What are your future goals, and do you think it would be possible to include other voices? For example, have you ever considered the possibility of users uploading and editing texts themselves?
DB

The site certainly started with the primary focus on east-central European media art and culture, which I considered


myself to be part of in the early 2000s. I was naive enough to attempt to make a book on the theme between 2008 and 2010.
During that period I came to notice the ambivalence of the
notion of medium in an art-historical and technological
sense (thanks to Florian Cramer). My understanding of
media art was that it is an art specific to its medium, very
much in Greenbergian terms, extended to the more recent
‘developments’, which were supposed to range from neo-geometrical painting through video art to net art.
At the same time, I implicitly understood art in the sense
of ‘expanded arts’, as employed by Fluxus in the early
1960s—objects as well as events that go beyond the (academic) separation between the arts to include music, film,
poetry, dance, design, publishing, etc., which in turn made
me also consider such phenomena as experimental film,
electro-acoustic music and concrete poetry.
Add to it the geopolitically unstable notion of East-Central
Europe and the striking lack of research in this area and
all you end up with is a headache. It took me a while to
realize that there’s no point even attempting to write a coherent narrative of the history of media-specific expanded
arts of East-Central Europe of the past hundred years. I
ended up with a wiki page outlining the supposed milestones along with a bibliography.[10]

[10] https://monoskop.org/CEE. Accessed 28 May 2016. And https://monoskop.org/Central_and_Eastern_Europe_Bibliography. Accessed 28 May 2016.

For this strand, the wiki served as the main notebook, leaving behind hundreds of wiki entries. The Log was more or less a ‘log’ of my research path and the presence of ‘western’ theory is to a certain extent a by-product of my search for a methodology and theoretical references.
As an indirect outcome, a new wiki section was
launched recently. Instead of writing a history of media-specific ‘expanded arts’ in one corner of the world, it takes
a somewhat different approach. Not a sequential text, not
even an anthology, it is an online single-page annotated
index, a ‘meta-encyclopaedia’ of art movements and styles,
intended to offer an expansion of the art-historical canonical
prioritization of the western painterly-sculptural tradition

to also include other artists and movements around the world.[11]

[11] https://monoskop.org/Art. Accessed 28 May 2016.
AD

Can you say something about the longevity of the project?
You briefly mentioned before that the web was your best
backup solution. Yet, it is of course known that websites
and databases require a lot of maintenance, so what will
happen to the type of files that you offer? More and more voices are saying that, for example, the PDF format is anything but stable. How do you deal with such challenges?
DB

Surely, in the realm of bits, nothing is designed to last
forever. Uncritical adoption of Flash has turned out to be perhaps the worst tragedy so far. And while there certainly were saner alternatives, if one was OK with renouncing its emblematic visual effects and the aesthetics that went with them, with PDF it is harder. There are EPUBs, but scholarly publications are simply unthinkable without page numbers, which are not supported in this format. Another
challenge the EPUB faces is from artists' books and other
design- and layout-conscious publications—its simplified
HTML format does not match the range of possibilities for
typography and layout one is used to from designing for
paper. Another open-source solution, PNG tarballs, is not
a viable alternative for sharing books.
The main schism between PDF and HTML is that one represents the domain of print (easily portable, and with fixed page size), while the other represents the domain of the web (embedded within it by hyperlinks pointing in both directions, and with flexible page size). EPUB was developed with the intention of synthesizing the two into a single format, but instead it reduces them into a third container, which is doomed to reinvent the whole thing once again.
It is unlikely that an ultimate converter between PDF and HTML will appear, simply because of the specificities
of print and the web and the fact that they overlap only in
some respects. Monoskop tends to provide HTML formats


next to PDFs where time allows. And if the PDF were to
suddenly be doomed, there would be a big conversion party.
On the side of audio and video, most media files on
Monoskop are in open formats—OGG and WEBM. There
are many other challenges: keeping up-to-date with PHP
and MySQL development, with the MediaWiki software
and its numerous extensions, and the mysterious ICANN
organization that controls the web domain.

AD

What were your biggest challenges beside technical ones? For example, have you ever been in trouble regarding copyright issues, or if not, how would you deal with such a situation?

DB

Monoskop operates on the assumption of making transformative use of the collected material. The fact of bringing it into certain new contexts, in which it can be accessed, viewed and interpreted, adds something that bookstores don’t provide. Time will show whether this can be understood as fair use. It is an opt-out model and it proves to be working well so far. Takedowns are rare, and if they are legitimate, we comply.

AD

Perhaps related to this question, what is your experience with user engagement? I remember Sean (from Aaaaarg, in conversation with Matthew Fuller, Mute 2011) saying that some people mirror or download the whole site, not so much in an attempt to ‘have everything’ but as a way to make sure that the content remains accessible. It is a conscious decision because one knows that one day everything might be taken down. This is of course particularly pertinent since, while we’re doing this interview, Sean and Marcell are being sued by a Canadian publisher.

DB

That is absolutely true, and any of these websites can disappear at any time. Archives like Aaaaarg, Monoskop or UbuWeb are created by makers rather than guardians, and it comes as an imperative to us to embrace redundancy, to promote spreading their contents across as many nodes and sites as anyone wishes. We may look at copying not as merely mirroring or making backups, but as opening up possibilities to start new libraries, new platforms, new databases. That is how these came about as well. Let there be Zzzzzrgs, Ůbuwebs and Multiskops.

Bibliography
Fuller, Matthew. ‘In the Paradise of Too Many Books: An Interview with Sean Dockray’. Mute, 4 May 2011. www.metamute.org/editorial/articles/paradise-too-many-books-interview-seandockray. Accessed 31 May 2016.
Online digital libraries
Aaaaarg, http://aaaaarg.fail.
Bibliotik, https://bibliotik.me.
Issuu, https://issuu.com.
Karagarga, https://karagarga.in.
Library Genesis / LibGen, http://gen.lib.rus.ec.
Memory of the World, https://library.memoryoftheworld.org.
Monoskop, https://monoskop.org.
Pad.ma, https://pad.ma.
Scribd, https://scribd.com.
Textz.com, https://textz.com.
UbuWeb, www.ubu.com.
