searching in Barok 2014


lting with a database storing documents. Phenomena such as the deepening of
specialization and thoroughgoing digitization have privileged the database as
the fundamental means for research. Obviously, this is a very recent
phenomenon. Queries were once formulated in natural language; now that
databases are queried in the SQL language, their interfaces are mere
extensions of it: researchers pose their questions by manipulating dropdowns,
checkboxes and input boxes assembled on a flat screen, run by software that in
turn translates them into a long line of conditioned _SELECTs_ and _JOINs_
performed on tables of data.

Specialization, digitization and networking have changed the language of
questioning. Inquiry, once attached to flesh and paper, has been entrusted to
the digital and networked. Researchers are querying the black box.

C

Searching in a collection of amassed tangible documents (i.e. a bookshelf) is
different from searching in a systematically structured repository (a library),
and even more so from searching in a digital repository (a digital library).
Not that they are mutually exclusive. One can devise structures and algorithms
to search through a printed text, or read books in a library one by one. They
are rather models embodying various processes associated with the query. These
properties of the query might be called the sequence, the structure and the
index. If they are present in the ways of querying documents, and we will
return to this issue, are they persistent within the inquiry as such?

D

This question itself is a rupture in the sequence. It makes a demand to depart
from one narrative (a continuous flow of words) to another, to figure it out,
while remaining bound to it (it would be even more so as a so-called
rhetorical question). So there has been one sequence, or line, of
inquiry--about the kinds of the query and its properties. That sequence itself
is a digressi


ts weights,
while words that seem specific to the book outweigh others even if they don't
occur very often. A selection of these words then serves as a descriptor of
the whole text, and can be thought of as a specific kind of 'tags'.

This process was formalized in a mathematical function in the 1970s, thanks to
a formula by Karen Spärck Jones which she entitled 'inverse document
frequency' (IDF), or in other words, "term specificity". It is measured as the
proportion of the total number of texts in the corpus to the number of texts
in which the word appears at least once. When multiplied by the frequency of
the word _in_ the text (divided by the maximum frequency of any word in the
text), we get _term frequency-inverse document frequency_ (tf-idf). In this
way we can get an automated list of subjects which are particular to the text
when compared to a group of texts.
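The computation described above can be sketched in Python (a hypothetical
illustration, not from the original text; the corpus and tokenization are
invented, and idf is log-scaled, as is conventional):

```python
import math

def tf_idf(term, text, corpus):
    """Term frequency (normalized by the most frequent word in the
    text) times inverse document frequency over the corpus."""
    words = text.lower().split()
    counts = {w: words.count(w) for w in set(words)}
    tf = counts.get(term, 0) / max(counts.values())
    # idf: total number of texts over the number of texts
    # containing the term at least once (log-scaled)
    containing = sum(1 for t in corpus if term in t.lower().split())
    idf = math.log(len(corpus) / containing) if containing else 0.0
    return tf * idf

corpus = [
    "the library stores the books",
    "the index points to books",
    "the dance is inscription",
]
# 'dance' occurs only in the third text, so it scores high there;
# 'the' occurs in every text, so its idf (and tf-idf) is zero.
print(tf_idf("dance", corpus[2], corpus))
print(tf_idf("the", corpus[0], corpus))
```

Selecting the top-scoring words of a text then yields the automated list of
subjects the passage describes.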

We have come to learn it through the practice of searching the web. It is a
mechanism not dissimilar to the thought process involved in retrieving
particular information online. Search engines have it built into their
indexing algorithms as well.

There is a paper proposing attaching words generated by tf-idf to hyperlinks
when referring to websites 14(http://bscit.berkeley.edu/cgi-bin/pl_dochome?query_src=&format=html&collection=Wilensky_papers&id=3&show_doc=yes).
This would enable finding the referred content even after the link is dead.
Hyperlinks in the paper's own references use this feature, and it can be
easily tested: 15(http://www.cs.berkeley.edu/~phelps/papers/dissertation-abstract.html?lexical-signature=notemarks+multivalent+semantically+franca+stylized).
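The scheme can be sketched as follows (hypothetical code: the tf-idf scoring
is a simplified stand-in, and only the `lexical-signature` parameter name is
taken from the example URL above):

```python
import math

def lexical_signature(text, corpus, k=5):
    """The k terms whose tf-idf scores are highest for this text."""
    words = text.lower().split()
    counts = {w: words.count(w) for w in set(words)}
    max_count = max(counts.values())
    n = len(corpus)
    def score(w):
        df = sum(1 for t in corpus if w in t.lower().split())
        return (counts[w] / max_count) * math.log(n / df)
    return sorted(counts, key=score, reverse=True)[:k]

def robust_link(url, text, corpus):
    """Append the signature so the text can be re-found if the link dies."""
    return url + "?lexical-signature=" + "+".join(lexical_signature(text, corpus))

corpus = [
    "an essay on shadow libraries and their catalogues",
    "a manual for the assembly line",
    "notes on dance and choreography",
]
print(robust_link("http://example.org/essay", corpus[0], corpus))
```

A search engine fed the signature terms can then relocate copies of the
referred text even when the original address no longer resolves.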

There is another measure, cosine similarity, which takes tf-idf further and
can be applied to cluster texts according to similarities in their
specificity. This might be i
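Cosine similarity compares two texts by the angle between their term vectors;
a minimal sketch (hypothetical example; raw word counts stand in here for
full tf-idf weights to keep it short):

```python
import math

def vector(text):
    """Bag-of-words vector: term -> count."""
    words = text.lower().split()
    return {w: words.count(w) for w in set(words)}

def cosine_similarity(a, b):
    """Dot product of the two vectors over the product of their norms."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# identical texts -> ~1.0; disjoint vocabularies -> 0.0
print(cosine_similarity(vector("library of books"), vector("library of books")))
print(cosine_similarity(vector("library of books"), vector("dance and film")))
```

Clustering then amounts to grouping texts whose pairwise similarity is high.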


searching in Barok 2014


ia. The
difference is its scale, reach and technique.
One of the messages of the recent “revelations” is that while it is recommended
to encrypt private communication, the internet is for its users also a medium of
direct contact with power. SEO, or search engine optimization, is now as relevant
a technique for websites as for books and other publications, since all of them
are read by similar algorithms; authors can read this situation as a political
dimension of their work, as a challenge to transform and model these algorithms
by texts.


II. Techniques of research in the humanities literature
Compiling the bibliography
Through the circuitry we got to the audience: readers. Today, they also include
software and algorithms such as those used for “reading” by information agencies
and corporations, and others facilitating reading for the so-called ordinary
reader, the reader searching for information online, but also for the “expert”
reader, searching primarily in library systems.
Libraries, as we said, are different from information agencies in that they are
funded by the public not to hide publications from it but to provide access to
them. A telling paradox of the age is that, on the one hand, information agencies
are storing almost all contemporary book production in its electronic version
while generally caring little about it, since the “signal” information lies
elsewhere; and on the other, in order to provide electronic access, paid or
direct, libraries have to scan, at great cost, even publications that were
prepared for print electronically.
A more remarkable difference is, of course, that libraries select and catalogue
publications.
Their methods of selection are determined in the first place by their public
institutional function as protector and projector of patriotic values, and this
is reflected in their pref


of the oeuvre of
the author. But consider an author working on an article who, in the early phase
of his research, needs to prepare a bibliography on the activity of Fluxus in
central Europe, or on the use of documentary film in education. Such research
cuts through national boundaries and/or branches of disciplines, and he is left
to travel not only to locate artefacts, protagonists and experts in the field
but also to find literature, which in turn makes even the mere process of
compiling a bibliography a relatively demanding and costly activity.

In this sense, the digitization of publications and archival material, providing
their free online access and enabling fulltext search, in other words “open
access”, catalyzes research across political-geographical and disciplinary
configurations. While the index of a printed book contains only selected terms,
and to search the index across several books the researcher has to have them all
at hand, software-enabled search in digitized texts (with a good OCR) works with
an index of every single term in all of them.
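What such software relies on is essentially an inverted index, mapping every
term to the volumes containing it; a minimal sketch (titles and snippets
invented for illustration):

```python
from collections import defaultdict

def build_index(texts):
    """Map every single term to the set of texts in which it occurs."""
    index = defaultdict(set)
    for title, body in texts.items():
        for word in body.lower().split():
            index[word].add(title)
    return index

texts = {
    "Volume A": "fluxus events in central europe",
    "Volume B": "documentary film in education",
}
index = build_index(texts)
print(sorted(index["in"]))       # found in both volumes
print(sorted(index["fluxus"]))   # found only in Volume A
```

Unlike a printed back-of-book index, nothing has to be selected in advance:
every term of every digitized text is addressable at once.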
This kind of research also obviously benefits from online translation tools and
multilingual case bibliographies online, as well as from second-hand bookstores
and small specialized libraries that play a corrective role to public ones, and
whose “open access” potential has been explored only to a very small extent
until now, but which I won’t discuss here further for lack of time.
Writing
Disciplinarity and patriotism are “embedded” in texts themselves, though I repeat that I do not mean this pejoratively.
Bibliographic records in bodies of texts, notes, attributions of sources and appended references can be read as formatted addresses of other texts, making apparent a kind of


searching in Barok 2018


dow_Libraries.jpg
/500px-
Liang_Lawrence_2012_Shadow_Libraries.jpg)](http://www.e-flux.com/journal/37/61228
/shadow-libraries/)

In the essay, he moves between identifying Library.nu as digital Alexandria
and as its shadow.
In this account, even large libraries exist in the shadows cast by their
monumental predecessors.
There’s a lineage, there’s a tradition.

Almost everyone and every institution has a library, small or large.
They’re not necessarily Alexandrias, but they strive to stay relevant.
Take the University of Amsterdam where I now work.
University libraries are large, but they’re hardly _large enough_.
The publishing market is so huge that you simply can’t keep up with all the
niche little disciplines.
So either you have to wait days or weeks for a missing book to be ordered
somewhere.
Or you have some EBSCO ebooks.
And most of the time if you’re searching for a book title in the catalogue,
all you get are its reviews in various journals the library subscribes to.

So my colleagues keep asking me:
Dušan, where do I find this or that book?
You need to scan through dozens of texts, check one page in that book, table
of contents of another book, read what that paper is about.

[![Arts humanities and social sciences digital libraries 2018.jpg](/images/thumb/8/81/Arts_humanities_and_social_sciences_digital_libraries_2018.jpg/500px-Arts_humanities_and_social_sciences_digital_libraries_2018.jpg)](/Digital_libraries#Libraries "Digital libraries#Libraries")

Well, just look _online_.

So what do digital libraries do?

[![Hand writing.jpg](/images/thumb/a/a2/Hand_writing.jpg/500px-Hand_writing.jpg)](/File:Hand_writing.jpg)

You write a manuscript, have it published,

[![Scanning hand.jpg](/images/thumb/4/48/Scanning_hand.jpg/500px-
Scannin


searching in Bodo 2014


without authorization were purged from the
budding digital collections. Those that survived complete deletion were moved into the dark, locked
down sections of digital libraries that sometimes still lurk behind the law-abiding public façades. Hopes
that a universal digital library could be built were lost in just a few short years, as those who tried
(such as Google or Hathitrust) got bogged down in endless court battles.
There are unauthorized text collections circulating on channels less susceptible to enforcement, such as
DVDs, torrents, or IRC channels. But the technical conditions of these distribution channels do not enable
the development of a library. Two of the most essential attributes of any proper library, the catalog
and the community, are hard to provide on such channels. The catalog doesn’t just organize the
knowledge stored in the collection; it is not just a tool for searching and browsing. It is a critical
component in the organization of the community of “librarians” who preserve and nourish the
collection. The catalog is what distinguishes an unstructured heap of computer files from a
well-maintained library, but it is the same catalog which makes shadow libraries, these unauthorized
text collections, an easy target of law enforcement. Those few digital online libraries that dare to
provide unauthorized access to texts in an organized manner, such as textz.org, a*.org, monoskop or
Gigapedia/library.nu, have all had their bad experiences with law enforcement and rights-holder dismay.
Of these pirate libraries, Gigapedia—later called Library.nu—was the largest at the turn of the 2010s. At
its peak, it was several orders of magnitude bigger than its peers, offering access to nearly a million
English-language documents. It was not just size that made Gigapedi


searching in Constant 2009


n to be displaced

figure 126

figure 125

Wearing the video library, performer Isabelle Bats presents a selection of films related to the themes of V/J10. As a living memory, the
discs and media players in the video library are embedded in a dress
designed by artists collective De Geuzen. Isabelle embodies an accessible interface between you (the viewer), and the videos. This human
interface allows for a mutual relationship: viewing the films influences
the experience of other parts of the program, and the situation and
context in which you watch the films play a role in experiencing and
interpreting the videos. A physical exchange between existing imagery, real-time interpretation, experiences and context, emerges as
a result.
The V/J10 video library collects excerpts of performance and dance
video art, and (documentary) film, which reflect upon our complex
body–technique relations. Searching for the indicating, probing, disturbing or subverting gesture(s) in the endless feedback loop between
technology, tools, data and bodies, we collected historical as well as
contemporary material for this temporary archive.

Modern Times or the Assembly Line
Reflects on the body in work environments, which are structured by
technology, ranging from pre-industrial manual work with analogue
tools, to the assembly line, to postmodern surveillance configurations.
24 Portraits
Excerpt from a series of documentary portraits by Alain Cavalier, FR,
1988-1991.
24 Portraits is a series of short documentaries paying tribute to women's
manual work. The intriguing and sensitive portraits of 24 women working
in different trades reveal the intimacy of bodies and their working tools.


Humain, trop humain
Quotes from a documentary by Louis
Malle, FR, 1972.
A documentary film


repetition, but
it is also always undone:
As Laurie Anderson says:
“You're walking. And you don't always realize it, but you're always
falling. With each step you fall forward slightly. And then catch yourself from falling.
Over and over,
you're falling.
And then catching
yourself from falling.” (Quoted after
Gabriele Brandstetter, ReMembering
the Body)
William Forsythe, for instance, considers classical ballet as a historical
form of a knowledge system loaded

with ideologies about society, the self,
the body, rather than a fixed set
of rules, which simply can be implemented. An arabesque is a platonic ideal for him, a prescription,
but it can't be danced: “There is
no arabesque, there is only everyone's arabesque.” His choreography
is concerned with remembering and
forgetting: referencing classical ballet, creating a geometrical alphabet,
which expands the classical form, and
searching for the moment of forgetfulness, where new movement can arise.
Over the years, he and his company
developed an understanding of dance
as a complex system of processing information with some analogies to computer programming.

Chance favours the prepared mind
Educational dance film, produced by Vlaams Theaterinstituut, Ministerie
van Onderwijs dienst Media and Informatie, dir. Anne Quirynen, 1990,
25 min.
Chance favours the prepared mind features discussions and demonstrations
by William Forsythe and four Frankfurt Ballet Dancers about their
understanding of movement and their working methods: “Dance is like
writing or drawing, some sort of inscription.” (William Forsythe)

Rehearsal Last Supper

The way of the weed
Experimental dance film featuring
William Forsythe, Thomas McManus
and dancers of the Frankfurt Ballet,
An-Marie Lambrechts, Peter Missotte


searching in Constant 2015


mean – we could migrate completely to OSS tools, but it’s slow progress.
Mainly because people (students) need (and want) to be trained in the same
commercial applications as the ones they will encounter in their
professional life.
How did Linux enter the design lab? How did that start?

It started with personal curiosity, but also for economic reasons. Our
school can’t afford to acquire all the software licenses we’d like. For
example, we can’t justify paying approx. 100 x 10 licenses just to implement
1. http://www.typeforge.net/
2. http://www.fba.up.pt/

the educational version of Fontlab on some of our computers; especially because
this package is only used by a part of our second-year design students.
You can imagine what the total budget would be with all the other needs ... I
personally believe that we can find everything we need on the web. It’s a
matter of searching long enough! So this is how I was very happy to find
Fontforge. An Open Source tool that is solid enough to use in education
and can produce (as far as I have been able to test) almost professional
results in font development. At first I couldn’t grasp how to use it under X
on Windows, so one day I set out to try and do it on Linux ... and one thing
led to another ...

What got you into using OSS? Was it all one thing leading to another?

Wow ... can’t remember ... I believe it had to do with my first experiences
online; I don’t think I knew the concept before 2000. I mean, I’ve started
using the web (IRC and basic browsing) in 1999, but I think it had to do
with the search for newer and better tools ...
I think I also started to get into it around that time. But I think I was
more interested in copyleft, though, than in software.

Oh ... (blush) not me ... I got into it definite


searching in Constant 2016


? Both types of texts
are worth considering preserving in libraries. The online environment has
created its own hybrid form between text and library, which is key to
understanding how digital text produces difference.
Historically, we have been treating texts as discrete units that are distinguished by their
material properties such as cover, binding, script. These characteristics establish them as
either a book, a magazine, a diary, sheet music and so on. One book differs from another,
books differ from magazines, printed matter differs from handwritten manuscripts. Each
volume is a self-contained whole, further distinguished by descriptors such as title, author,
date, publisher, and classification codes that allow it to be located and referred to. The
demarcation of a publication as a container of text works as a frame or boundary which
organises the way it can be located and read. Researching a particular subject matter, the
reader is carried along by classification schemes under which volumes are organised, by
references inside texts, pointing to yet other volumes, and by tables of contents and indexes of
subjects that are appended to texts, pointing to places within that volume.
So while their material properties separate texts into distinct objects, bibliographic information
provides each object with a unique identifier, a unique address in the world of print culture.
Such identifiable objects are further replicated and distributed across containers that we call
libraries, where they can be accessed.
The online environment, however, intervenes in this condition. It establishes shortcuts.
Through search engines, digital texts can be searched for any text sequence, regardless of
their distinct materiality and bibliographic specificity. This changes the way they function as a
l


searching in Constant 2018


fab}
13. [[Kruchten]{.fname}, [Philippe]{.gname}: [Agile's Teenage
Crisis?]{.title}, [2011]{.date}. [-\>](#faeebade)]{#edabeeaf}
:::


strator at the department of Computer Science,
KULeuven): \"*It is difficult to answer the question \'what is
software\', but I know what is good software*\"

Thomas Cnudde (hardware designer at ESAT - COSIC, Computer Security and
Industrial Cryptography, KULeuven): \"*Software is a list of sequential
instructions! Hardware for me is made of silicon, software a sequence of
bits in a file. But naturally I am biased: I\'m a hardware designer so I
like to consider it as unique and special*\".

Amal Mahious (Director of NAM-IP, Namur): \"*This, you have to ask the
specialists.*\"

` {.verbatim}
*what is software?
--the unix filesystem says: it's a file----what is a file?
----in the filesystem, if you ask xxd:
------ it's a set of hexadecimal bytes
-------what is hexadecimal bytes?
------ -b it's a set of binary 01s
----if you ask objdump
-------it's a set of instructions
--side channel researching also says:
----it's a set of instructions
--the computer glossary says:
----it's a computer's programs, plus the procedure for their use http://etherbox.local/home/pi/video/A_Computer_Glossary.webm#t=02:26
------ a computer's programs is a set of instructions for performing computer operations
`

[Remember: To answer the question \"*what is software*\" depends on the
situation, goal, time, and other contextual influences.]{.remember
.descriptor} [TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.everyonescp]{.tmp}
[]{#mzcxodix .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.devmem) FMEM
and /DEV/MEM]{.method .descriptor} [What: Different ways of exploring
your memory (RAM). Because in unix everything is a file, you can access
your memory as if it were a file.]{.what .descriptor} [Urgency: To try
and observe the operational level of software, getting clo


ow
make a list of adjectives and try it for yourself. Level two of this
exercise consists of observing a software application and deducing from
this the values of the individuals, companies, and societies that
produced it.]{.how .descriptor} [Note: A qualifier may narrow down
definitions to undesirable degrees.]{.note .descriptor} [WARNING: This
exercise may be more effective at identifying normative and ideological
assumptions at play in the making, distributing, using, and maintaining
of software than at producing a concise definition.]{.warning
.descriptor} [Example: \"This morning, Jan had difficulties to answer
the question \"what is software\", but he said that he could answer the
question \"what is good software\". What is good software?]{.example
.descriptor} [TODO: RELATES TO]{.tmp} []{#mmmwmje2 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.softwarethrough)
Searching \"software\" through software]{.method .descriptor} [What: A
quick way to sense the ambiguity of the term \'software\', is to go
through the manual files on your hard drive and observe in which cases
is the term used.]{.what .descriptor} [How: command-line oneliner]{.how
.descriptor} [Why: Software is a polymorphic term that takes different
meanings and comes with different assumptions for the different agents
involved in its production, usage and all other forms of encounter and
subjection. From the situated point of view of the software present on
your machine, when and why does software call itself as such?]{.why
.descriptor} [Example]{.example .empty .descriptor}

so software exists only outside your computer? only in general terms?
checking for the word software in all man pages:

grep -nr software /usr/local/man
!!!!

software appears only in terms of license:

This


searching in Dekker & Barok 2017





LOST AND LIVING (IN) ARCHIVES

initiatives, and individuals. In the early days it was modelled
on Wikipedia (which had been running for two years when
Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over
the years, the geographic and thematic boundaries have
gradually expanded to embrace the arts and humanities in
their widest sense, focusing primarily on lesser-known
phenomena.1 Perhaps the biggest change is the ongoing
shift from mapping people, events, and places towards
synthesizing discourses.
1. See for example https://monoskop.org/Features. Accessed 28 May 2016.
A turning point occurred during my studies at the
Piet Zwart Institute, in the Networked Media programme
from 2010–2012, which combined art, design, software,
and theory with support in the philosophy of open source
and prototyping. While there, I was researching aspects of
the networked condition and how it transforms knowledge,
sociality and economics: I wrote research papers on leaking
as a technique of knowledge production, a critique of the
social graph, and on the libertarian values embedded in the
design of digital currencies. I was ready for more practice.
When Aymeric Mansoux, one of the tutors, encouraged me
to develop my then side-project Monoskop into a graduation
work, the timing was good.
The website got its own domain, a redesign, and most
crucially, the Monoskop wiki was restructured from its
focus on media art and culture towards the much wider
embrace of the arts and humanities. It turned to a media
library of sorts. The graduation work also consisted of
a symposium about personal collecting and media archiving,2
which saw its loose follow-ups on media aes
2. https://monoskop.org/Symposium. Accessed 28 May 2016.
3. https://monosko


searching in Dockray 2010


of orientation within the
writing disappear as it loses the historical struc­
ture of the book and becomes pure, continuous
text. For example, page numbers give way to the
more abstract concept of a "location" when the
file is derived from the export as opposed to the
scan, from the text data as opposed to the
physi­cal object. The act of reading in a group is also


different ways. An analogy: they are not prints
from the same negative, but entirely different
photographs of the same subject. Our scans are
variations, perhaps competing (if we scanned the
same pages from the same edition), but, more
likely, functioning in parallel.
Completists prefer the export, which has a
number of advantages from their perspective:
the whole book is usually kept intact as one unit,
the file; file sizes are smaller because the files are
based more on the text than an image; the file is
found by searching (the Internet) as opposed to
searching through stacks, bookstores, and attics; it
is at least theoretically possible to have every file.
Each file is complete and the same everywhere,
such that there should be no need for variations.
At present, there are important examples of where
variations do occur, notably efforts to improve
metadata, transcode out of proprietary formats,
and to strip DRM restrictions. One imagines an
imminent future where variations proliferate based
on an additive reading— a reader makes highlights,
notations, and marginal arguments and then
re­distributes the file such that someone's
"reading" of a particular text would generate its own public,
the logic of the scan infiltrating the export.

different — "Turn to page 24" is followed by the
sound of a race of collective page flipping, while
"Go to location 2136" leads to finger taps and
caresses on plastic. Factions based on who has the
same edit


searching in USDC 2015


i-Hub may provide that user
access to a copy provided by the Library Genesis Project rather than re-download an additional
copy of the article from ScienceDirect. As a result, Defendants Sci-Hub and Library Genesis
Project act in concert to engage in a scheme designed to facilitate the unauthorized access to and
wholesale distribution of Elsevier’s copyrighted works legitimately available on the
ScienceDirect platform.
The Library Genesis Project’s Unlawful Distribution of Plaintiff’s Copyrighted Works
37. Access to the Library Genesis Project’s repository is facilitated by the website

“libgen.org,” which provides its users the ability to search, download content from, and upload
content to, the repository. The main page of libgen.org allows its users to perform searches in
various categories, including “LibGen (Sci-Tech),” and “Scientific articles.” In addition to
searching by keyword, users may also search for specific content by various other fields,
including title, author, periodical, publisher, or ISBN or DOI number.
38. The libgen.org website indicates that the Library Genesis Project repository

contains approximately 1 million “Sci-Tech” documents and 40 million scientific articles. Upon
information and belief, the large majority of these works is subject to copyright protection and is
being distributed through the Library Genesis Project without the permission of the applicable
rights-holder. Upon information and belief, the Library Genesis Project serves primarily, if not


Case 1:15-cv-04282-RWS Document 1 Filed 06/03/15 Page 11 of 16

exclusively, as a scheme to violate the intellectual property rights of the owners of millions of
copyrighted works.
39. Upon information and belief, Elsevier owns the copyrights in a substantial number


searching in Ludovico 2013


tion of media (about contemporary art, culture
and politics, with a special focus on eastern Europe) into a common resource,
freely downloadable and regularly updated. It is a remarkably inspired
selection that can be shared regardless of possible copyright restrictions.
_Monoskop_ is an extreme and excellent example of a personal digital library
made public. But any small or big collection can be easily shared. Calibre5 is
an open source software that enables one to efficiently manage a personal
library and to create temporary or stable autonomous zones in which entire
libraries can be shared among a few friends or entire communities.

Marcell Mars,6 a hacktivist and programmer, has worked intensively around this
subject. Together with Tomislav Medak and Vuk Cosic, he organized the HAIP
2012 festival in Ljubljana, where software developers worked collectively on a
complex interface for searching and downloading from major independent online
e-book collections, turning them into a sort of temporary commons. Mars'
observation that, "when everyone is a librarian, the library is everywhere,"
explains the infinite and recursive de-centralization of personal digital
collections and the role of the digital in granting much wider access to
published content.

This access, however, emphasizes the intrinsic fragility of the digital - its
complete dependence on electricity and networks, on the integrity of storage
media, and on updated hardware and software. Among the few artists to have
conceptually explored this fragility as it affects books is David Guez, whose
work _Humanpédia_7 can be defined as an extravagant type of "time-based art".
The work is clearly inspired by Ray Bradbury's _Fahrenheit 451_ , in which a
small secret community conspires against a total ban on books by memorizing
en


searching in Marczewska, Adema, McDonald & Trettien 2018


as a
pseudoserene horizon and OA as a cultural coastline. One is predictable, static, and
limiting, i.e. designed to satisfy the managerial class of the contemporary university;
the other works towards a poethics of OA, with all its unpredictability, complexity,
and openness. OA publishing which operates within the confines of the pseudoserene
horizon is representative of what happens when we become complacent in the way we
think about the work of publishing. Conversely, OA seen as a dynamic coastline–the
model that Radical Open Access (ROA) collective works to advance–is a space where
publishing is always in process and makes possible a rethinking of the experience of
publishing. Seen as such, ROA is an exposition of the forms of publishing that we
increasingly take for granted, and in doing so mirrors the ethos of poethics. The role
of ROA, then, is to highlight the importance of searching for new models of OA, if
OA is to enact its function as a swerve in attitudes towards knowledge production
and consumption.
But anything new is ugly, Retallack suggests, via Picasso: ‘This is always a by-product
of a truly experimental aesthetics, to move into unaestheticized territory. Definitions
of the beautiful are tied to previous forms’ (Retallack 2003, 28). OA, as it has evolved
in recent years, has not allowed the messiness of the ugly. It has not been messy enough
because it has been co-opted, too quickly and unquestioningly, by the agendas of
the contemporary university. OA has become too ‘beautiful’ to enact its disruptive
potential.3 In its drive for legitimisation and recognition, the project of OA has been
motivated by the desire to make this form of publishing too immediately familiar, and


Kaja Marczewska

too willingly PDF-able. The consequences of this attitu


searching in Mattern 2014


//placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-4x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-4x.jpg) Hammond, Beeby and Babka, Harold Washington Library Center, Chicago Public Library. [Photo by Robert Dawson, from _[Public Library: An American Commons](https://placesjournal.org/article/public-library-an-american-commons/)_ ]

## Library as Social Infrastructure

Public libraries are often seen as “opportunity institutions,” opening doors
to, and for, the disenfranchised. 6 People turn to libraries to access the
internet, take a GED class, get help with a resumé or job search, and seek
referrals to other community resources. A [recent
report](http://nycfuture.org/research/publications/branches-of-opportunity) by
the Center for an Urban Future highlighted the benefits to immigrants,
seniors, individuals searching for work, public school students and aspiring
entrepreneurs: “No other institution, public or private, does a better job of
reaching people who have been left behind in today’s economy, have failed to
reach their potential in the city’s public school system or who simply need
help navigating an increasingly complex world.” 7

The new Department of Outreach Services at the Brooklyn Public Library, for
instance, partners with other organizations to bring library resources to
seniors, school children and prison populations. The Queens Public Library
employs case managers who help patrons identify public benefits for which
they’re eligible. “These are all things that someone could dub as social
services,” said Queens Library president Thomas Galante, “but they’re not. … A
public library today has information to improve people’s lives. We are an
enabler; we are a connect


searching in Mattern 2018



tradition](http://www.publicseminar.org/2017/09/the-life-of-the-mind-online/),
too.

Individual scholars – particularly those who enjoy some measure of security –
can model a different pathway and advocate for a more sane, sustainable, and
inclusive publication and review system. Rather than blaming the “bad actors”
for making bad choices and perpetuating a flawed system, let’s instead
incentivize the good ones to practice generosity.

In that spirit, I’d like to close by offering a passage I included in my own
promotion dossier, where I justified my choice to prioritize public
scholarship over traditional peer-reviewed venues. I aimed here to make my
values explicit. While I won’t know the outcome of my review for a few months,
and thus I can’t say whether or not this passage successfully served its
rhetorical purpose, I do hope I’ve convincingly argued here that, in
researching media and technology, one should also think critically about the
media one chooses to make that research public. I share this in the hope that
it’ll be useful to others preparing for their own job searches and faculty
reviews, or negotiating their own politics of practice. The passage is below.

* * *

…[A] concern with public knowledge infrastructures has… informed my choice of
venues for publication. Particularly since receiving tenure I’ve become much
more attuned to publication platforms themselves as knowledge infrastructures.
I’ve actively sought out venues whose operational values match the values I
espouse in my research – openness and accessibility (and, equally important,
good design!) – as well as those that The New School embraces through its
commitment to public scholarship and civic engagement. Thus, I’ve steered away
from those peer-reviewed publications th


searching in Medak, Mars & WHW 2015


what has been discovered and as a necessary means for stimulating
new discoveries.
The Book, the Library in which it is preserved,
and the Catalogue which lists it, have seemed for
a long time as if they had achieved their heights of
perfection or at least were so satisfactory that serious
changes need not be contemplated. This may have
been so up to the end of the last century. But for a
score of years great changes have been occurring
before our very eyes. The increasing production of
books and periodicals has revealed the inadequacy of
older methods. The increasing internationalisation
of science has required workers to extend the range
of their bibliographic investigations. As a result, a
movement has occurred in all countries, especially
Germany, the United States and England, for the
expansion and improvement of libraries and for
an increase in their numbers. Publishers have been
searching for new, more flexible, better-illustrated,
and cheaper forms of publication that are better-coordinated with each other. Cataloguing enterprises
on a vast scale have been carried out, such as the
International Catalogue of Scientific Literature and
the Universal Bibliographic Repertory. [2]
Three facts, three ideas, especially merit study
for they represent something really new which in
the future can give us direction in this area. They
are: The Repertory, Classification and the Office of
Documentation.
•••


2. The Repertory, like the book, has gradually been
increasing in size, and improvements in it suggest
the emergence of something new which will radically modify our traditional ideas.
From the point of view of form, a book can be
defined as a group of pages cut to the same format
and gathered together in such a way as to form a
whole. It was not always so. F


searching in Sollfrank 2018


l, is an institution that collects,
orders, and makes published information available while taking into account
archival, economic, and synoptic aspects. A shadow library does exactly the
same thing, but its mission is not an official one. Usually, the
infrastructure of shadow libraries is conceived, built, and run by a private
initiative, an individual, or a small group of people, who often prefer to
remain anonymous for obvious reasons. In terms of the media content provided,
most shadow libraries are peer-produced in the sense that they are based on
the contributions of a community of supporters, sometimes referred to as
“amateur librarians”. The two key attributes of any proper library, according
to Amsterdam-based media scholar Bodo Balazs, are the catalog and the
community: “The catalogue does not just organize the knowledge stored in the
collection; it is not just a tool of searching and browsing. It is a critical
component in the organisation of the community of librarians who preserve and
nourish the collection.”16 What is specific about shadow libraries, however,
is the fact that they make available anything their contributors consider to
be relevant—regardless of its legal status. That is to say, shadow libraries
also provide unauthorized access to copyrighted publications, and they make
the material available for download without charge and without any other
restrictions. And because there is a whole network of shadow libraries whose
mission is “to remove all barriers in the way of science,”17 experts speak of
an ecosystem fostering free and universal access to knowledge.

The notion of the shadow library enjoyed popularity in the early 2000s when
the wide availability of digital networked media contributed to the emergence
of large-scale repositories of


searching in Stalder 2018


Google
was able to demonstrate the performance capacity of its new programs in
an impressive manner: from a collection of randomly chosen YouTube
videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
it was possible to create a model in just three days that increased
facial recognition in unstructured images by 70
percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
does not "know" what a face is, but it reliably recognizes a class of
forms that humans refer to as a face. One advantage of a model that is
not created on the basis of prescribed parameters is that it can also
identify faces in non-standard situations (for instance if a person is
in the background, if a face is half-concealed, or if it has been
recorded at a sharp angle). Thanks to this technique, it is possible to
search the content of images directly and not, as before, primarily by
searching their descriptions. Such algorithms are also being used to
identify people in images and to connect them in social networks with
the profiles of the people in question, and this []{#Page_111
type="pagebreak" title="111"}without any cooperation from the users
themselves. Such algorithms are also expected to assist in directly
controlling activity in "unstructured" reality, for instance in
self-driving cars or other autonomous mobile applications that are of
great interest to the military in particular.

Algorithms of this sort can react and adjust themselves directly to
changes in the environment. This feedback, however, also shortens the
timeframe within which they are able to generate repetitive and
therefore predictable results. Thus, algorithms and their predictive
powers can themselves become unpredictable. Stock markets have
frequently experienced so-called "sub-second extreme eve


nt to a market-leading position, at the beginning it
was still relatively simple and its mode of operation was at least
partially transparent. It followed the classical statistical model of an
algorithm. A document or site referred to by many links was considered
more important than one to which fewer links
referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
the given structural order of information and determined the position of
every document therein, and this was largely done independently of the
context of the search and without making any assumptions about it. This
approach functioned relatively well as long as the volume of information
did not exceed a certain size, and as long as the users and their
searches were somewhat similar to one another. In both respects, this is
no longer the case. The amount of information to be pre-sorted is
increasing, and users are searching in all possible situations and
places for everything under the sun. At the time Google was founded, no
one would have thought to check the internet, quickly and while on
one\'s way, for today\'s menu at the restaurant round the corner. Now,
thanks to smartphones, this is an obvious thing to do.
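Stalder's summary of the classical model (a page referred to by many links counts as more important, and links from important pages count for more) can be sketched as a toy power iteration. This is an illustration under simplified assumptions, not Google's production algorithm; the link graph and function name are hypothetical.

```python
# Toy sketch of the classical PageRank idea: a page's importance derives
# from how many pages link to it, weighted recursively by the importance
# of those linking pages. Not Google's actual implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page keeps a small baseline rank regardless of links
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dead ends distribute nothing (simplified)
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical link graph: "a" has three inbound links, "c" has none.
web = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
ranks = pagerank(web)
# "a" ends up ranked above "c", mirroring the description in the text.
```

Note 104 below is visible even in this sketch: a page like "c", which nothing links to, retains only the small baseline rank and would effectively vanish from results.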
:::

::: {.section}
### Algorithm clouds {#c2-sec-0023}

In order to react to such changes in user behavior -- and simultaneously
to advance it further -- Google\'s search algorithm is constantly being
modified. It has become increasingly complex and has assimilated a
greater amount of contextual []{#Page_115 type="pagebreak"
title="115"}information, which influences the value of a site within
PageRank and thus the order of search results. The algorithm is no
longer a fixed object or unchanging recipe but is transforming into a
dynamic process, an opaque cloud composed of multiple interacting
al


gage in activity *z*. It is in this way that Amazon
assembles its book recommendations, for the company knows that, within
the cluster of people that constitutes part of every person\'s profile,
a certain percentage of them have already gone through this sequence of
activity. Or, as the data-mining company Science Rockstars (!) once
pointedly expressed on its website, "Your next activity is a function of
the behavior of others and your own past."
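The formula quoted above, "your next activity is a function of the behavior of others and your own past," can be sketched as a toy neighborhood-based recommender. This illustrates the general technique, not Amazon's actual algorithm; the item names are hypothetical.

```python
# Toy sketch of behavior-based recommendation: score items held by
# users whose past behavior overlaps with yours, weighting by the
# size of the overlap. An illustration, not Amazon's algorithm.
from collections import Counter

def recommend(history, others, top_n=3):
    """Suggest items that users with a shared past also have."""
    scores = Counter()
    for other in others:
        overlap = len(history & other)
        if overlap == 0:
            continue  # no shared past, no evidence
        for item in other - history:
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

me = {"book_a", "book_b"}
peers = [{"book_a", "book_b", "book_c"}, {"book_b", "book_d"}, {"book_e"}]
suggestions = recommend(me, peers)
# "book_c" outranks "book_d": its holder shares more of "my" past.
```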

Google and other providers of algorithmically generated orders have been
devoting increased resources to the prognostic capabilities of their
programs in order to make the confusing and potentially time-consuming
step of the search obsolete. The goal is to minimize a rift that comes
to light []{#Page_117 type="pagebreak" title="117"}in the act of
searching, namely that between the world as everyone experiences it --
plagued by uncertainty, for searching implies "not knowing something" --
and the world of algorithmically generated order, in which certainty
prevails, for everything has been well arranged in advance. Ideally,
questions should be answered before they are asked. The first attempt by
Google to eliminate this rift is called Google Now, and its slogan is
"The right information at just the right time." The program, which was
originally developed as an app but has since been made available on
Chrome, Google\'s own web browser, attempts to anticipate, on the basis
of existing data, a user\'s next step, and to provide the necessary
information before it is searched for in order that such steps take
place efficiently. Thus, for instance, it draws upon information from a
user\'s calendar in order to figure out where he or she will have to go
next. On the basis of real-time traffic data, it will then suggest the
optimal way to get the


s, and which can lead to the lowering of the rank in which they
appear in Google\'s general search results
pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
Commission accused the company of manipulating search results to its own
advantage and the disadvantage of users.

This is not the only instance in which the political side of search
algorithms has come under public scrutiny. In the summer of 2012, Google
announced that sites with higher numbers of copyright removal notices
would henceforth appear lower in its
rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
introduced explicitly political and economic criteria in order to
influence what, according to the standards of certain powerful players
(such as film studios), users were able to
view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
be possible to speak of the personalization of searching, except that
the heart of the situation was not the natural person of the user but
rather the juridical person of the copyright holder. It was according to
the latter\'s interests and preferences that searching was being
reoriented. Amazon has employed similar tactics. In 2014, the online
merchant changed its celebrated recommendation algorithm with the goal
of reducing the presence of books released by irritating publishers that
dared to enter into price negotiations with the
company.[^122^](#c2-note-0122){#c2-note-0122a}

Controversies over the methods of Amazon or Google, however, are the
exception rather than the rule. Necessary (but never neutral) decisions
about recording and evaluating data []{#Page_121 type="pagebreak"
title="121"}with algorithms are being made almost all the time without
any discussion whatsoever. The logic of the original PageRank algorithm
was criticized as early as the year 2000 for essentially representing
the commercial logic of mass media, systematically disadvantaging
less-popular though perhaps otherwise relevant information, and thus
undermining the "substan


n rise
to search engine optimizers, which attempt by various means to optimize
a website\'s evaluation by search engines.

[103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
and its influence on the early version of Google\'s PageRank, see Katja
Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
2009), pp. 64--83.

[104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
not be registered by the algorithm at all, for the search engine indexed
the web by having its "crawler" follow the links itself.

[105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
[moz.com](http://moz.com) (2016), online.

[106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
of Personalisation," *First Monday* 17 (2011), online.

[107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
Factors," *Search Engine Journal* (May 31, 2013), online.

[108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
advertising that motivates the collection of personal information. Such
information is also needed for the development of personalized
algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
the flood of data. It can therefore be assumed that the rampant
collection of personal information will not cease or slow down even if
commercial demands happen to change, for instance to a business model
that is not based on advertising.

[109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
these three levels are recorded, see Felix Stalder and


searching in Stankievech 2016


is increasingly precarious in today’s
shift to a greater base of contract sessional instructors. When
I have been between institutions, I have lost access to the library
resources upon which my research and scholarship depended.
So, although academic publishing functions in accord with library
acquisitions, there are countless intellectuals—some of whom
are temporary hires or between job appointments, others who
are looking for work—and thus do not have access to libraries.
In this position, I would resort to asking colleagues and friends
to share their access or help me by downloading articles through
their respective institutional portals. Arg.org helps to relieve
this precarity through a shared library which allows scholarship
to continue; Arg.org is thus best described as a community of
readers who share their research and legally-acquired resources
so that when someone is researching a specific topic, the adequate book/essay can be found to fulfill the academic argument.
c. Special circumstances of non-traditional education. Several
years ago, I co-founded the Yukon School of Visual Arts in
Dawson City as a joint venture between an Indigenous government and the State college. Because we were a tiny school,
we did not fit into the typical academic brackets regarding student
population, nor could we access the sliding scale economics
of academic publishers. As a result, even the tiniest package for
a “small” academic institution would be thousands of times larger
than our population and budget. Consequently, neither I
nor my students could access the essential academic resources
required for a post-secondary education. I attempted to solve this
problem by forging partnerships, pulling in favors, and accessing
resources through platforms like Arg.org. It is impo


searching in Tenen & Foxman 2014


At its inception, *Aleph* aggregated several "home-grown" archives,
already in wide circulation in universities and on the gray market.
These included:

-- *KoLXo3*, a collection of scientific texts that was at one time
distributed on 20 DVDs, overlapping with early Gigapedia efforts;\
-- *mexmat*, a library collected by the members of Moscow State
University's Department of Mechanics and Mathematics for internal use,
originally distributed through private FTP servers;\
-- *Homelab*, *Ihtik*, and *Ingsat* libraries;\
-- the Foreign Fiction archive collected from IRC \#\*\*\*
2003.09-2011.07.09 and the Internet Library;\
-- the *Great Science Textbooks* collection and, later, over 20 smaller
miscellaneous archives.^[27](#fn-2025-27){#fnref-2025-27}^

In retrospect, we can categorize the founding efforts along three
parallel tracks: 1) as the development of "front-end" server software
for searching and downloading books, 2) as the organization of an online
forum for enthusiasts willing to contribute to the project, and 3) the
collection effort required to expand and maintain the "back-end" archive
of documents, primarily in .pdf and .djvu
formats.^[28](#fn-2025-28){#fnref-2025-28}^ "What do we do?" writes one
of the early volunteers (in 2009) on the topic of "Outcomes, Goals, and
Scope of the Project." He answers: "we loot sites with ready-made
collections," "sort the indices in arbitrary normalized formats," "for
uncatalogued books we build a 'technical index': name of file, size,
hashcode," "write scripts for database sorting after the initial catalog
process," "search the database," "use the database for the construction
of an accessible catalog," "build torrents for the distribution of files
in the collection."^[29](#fn-2025-29){#fnref-2025-29}^ But, "everything
begins with the
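The "technical index" the volunteer describes for uncatalogued books (name of file, size, hashcode) reduces to a few lines. A minimal sketch follows; the function name and the choice of SHA-256 are assumptions for illustration, not details documented by the project.

```python
# Minimal sketch of a "technical index" entry for an uncatalogued file:
# record its name, size, and a hash code so duplicates can be detected
# before proper cataloguing. Function name and SHA-256 are assumptions.
import hashlib
import os

def technical_index_entry(path):
    """Return (file name, size in bytes, SHA-256 hex digest) for one file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so large .pdf/.djvu files do not fill memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return (os.path.basename(path), os.path.getsize(path), digest.hexdigest())
```

Because identical files yield identical digests, such an index doubles as a deduplication key when merging collections "looted" from ready-made sources.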


searching in Thylstrup 2019


gitization / Nanna Bonde Thylstrup.

Description: Cambridge, MA : The MIT Press, [2018] | Includes bibliographical
references and index.

Identifiers: LCCN 2018010472 | ISBN 9780262039017 (hardcover : alk. paper)

eISBN 9780262350044

Subjects: LCSH: Library materials--Digitization. | Archival materials--
Digitization. | Copyright and digital preservation.

Classification: LCC Z701.3.D54 T49 2018 | DDC 025.8/4--dc23 LC record
available at


and Matthew Fuller. 9

Another early precursor of mass digitization emerged with Project Gutenberg,
often referred to as the world’s oldest digital library. Project Gutenberg was
the brainchild of author Michael S. Hart, who in 1971, using technologies such
as ARPANET, Bulletin Board Systems (BBS), and Gopher protocols, experimented
with publishing and distributing books in digital form. As Hart reminisced in
his later text, “The History and Philosophy of Project Gutenberg,”10 Project
Gutenberg emerged out of a donation he received as an undergraduate in 1971,
which consisted of $100 million worth of computing time on the Xerox Sigma V
mainframe at the University of Illinois at Urbana-Champaign. Wanting to make
good use of the donation, Hart, in his own words, “announced that the greatest
value created by computers would not be computing, but would be the storage,
retrieval, and searching of what was stored in our libraries.”11 He therefore
committed himself to converting analog cultural works into digital text in a
format not only available to, but also accessible/readable to, almost all
computer systems: “Plain Vanilla ASCII” (ASCII for “American Standard Code for
Information Interchange”). While Project Gutenberg only converted about 50
works into digital text in the 1970s and the 1980s (the first was the
Declaration of Independence), it today hosts up to 56,000 texts in its
distinctly lo-fi manner.12 Interestingly, Michael S. Hart noted very early on
that the intention of the project was never to reproduce authoritative
editions of works for readers—“who cares whether a certain phrase in
Shakespeare has a ‘:’ or a ‘;’ between its clauses”—but rather to “release
etexts that are 99.9% accurate in the eyes of the general reader.”13 As the
pr


henik et al.
refer to here is of course the claims of Erez Aiden and Jean-Baptiste Michel
among others, who promote “culturomics,” that is, the use of huge amounts of
digital information—in this case the corpus of Google Books—to track changes
in language, culture, and history. See Aiden and Michel 2013; and Michel et
al. 2011. 66. Neubert 2008; and Weiss and James 2012, 1–3. 67. I am indebted
to Gayatri Spivak here, who makes this argument about New York in the context
of globalization; see Spivak 2000. 68. In this respect Google mirrors the
glocalization strategies of media companies in general; see Thussu 2007, 19.
69. Although the decisions of foreign legislation of course also affect the
workings of Google, as is clear from the growing body of European regulatory
casework on Google such as the right to be forgotten, competition law, tax,
etc.

# 3
Sovereign Soul Searching: The Politics of Europeana

## Introduction

In 2008, the European Commission launched the European mass digitization
project, Europeana, to great fanfare. Although the EC’s official
communications framed the project as a logical outcome of years of work on
converging European digital library infrastructures, the project was received
in the press as a European counterresponse to Google Books.1 The popular media
framings of Europeana were focused in particular on two narratives: that
Europeana was a public response to Google’s privatization of cultural memory,
and that Europeana was a territorial response to American colonization of
European information and culture. This chapter suggests that while both of
these sentiments were present in Europeana’s early years, the politics of what
Europeana was—and is—paints a more complicated picture. A closer glance at
Europeana’s social,


at in some respects reproduces and accentuates the
existing politics of cultural memory institutions in terms of representation
and ownership, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

The story of how Europeana’s initial collection was published and later
revised offers a good opportunity to examine its late-sovereign political
dynamics. Europeana launched in 2008, giving access to some 4.5 million
digital objects from more than 1,000 institutions. Shortly after its launch,
however, the site crashed for several hours. The reason given by EU officials
was that Europeana was a victim of its own success: “On the first day of its
launch, Europe’s digital library Europeana was overwhelmed by the interest
shown by millions of users in this new project … thousands of users searching
in the very same second for famous cultural works like the _Mona Lisa_ or
books from Kafka, Cervantes, or James Joyce. … The site was down because of
massive interest, which shows the enormous potential of Europeana for bringing
cultural treasures from Europe’s cultural institutions to the wide public.” 78
The truth, however, lay elsewhere. As a Europeana employee explained, the site
didn’t buckle under the enormous interest shown in it, but rather because
“people were hitting the same things everywhere.” The problem wasn’t so much
the way they were hitting on material, but _what_ they were hitting; the
Europeana employee explained that people’s search terms took the Commission by
surprise, “even hitting things the Commission didn’t want to show. Because
people always search for wrong things. People tend to look at pornographic and
forbidden material such as _Mein Kam


wn territory and on the
global scene.

## Monoskop

In contrast to the broad and distributed infrastructure of lib.ru, other
shadow libraries have emerged as specialized platforms that cater to a
specific community and encourage a specific practice. Monoskop is one such
shadow library. Like lib.ru, Monoskop started as a one-man project and in many
respects still reflects its creator, Dušan Barok, who is an artist, writer,
and cultural activist involved in critical practices in the fields of
software, art, and theory. Prior to Monoskop, his activities were mainly
focused on the Bratislava cultural media scene, and Monoskop was among other
things set up as an infrastructural project, one that would not only offer
content but also function as a form of connectivity that could expand the
networked powers of the practices of which Barok was a part.34 In particular,
Barok was interested in researching the history of media art so that he could
frame the avant-garde media practices in which he engaged in Bratislava within
a wider historical context and thus lend them legitimacy.

### The Shadow Library as a Legal Stratagem

Monoskop was partly motivated by Barok’s own experiences of being barred from
works he deemed of significance to the field in which he was interested. As he
notes, the main impetus to start a blog “came from a friend who had access to
PDFs of books I wanted to read but could not afford to buy as they were not
available in public libraries.”35 Barok thus began to work on Monoskop with a
group of friends in Bratislava, initially hiding it from search engine bots to
create a form of invisibility that obfuscated its existence without, however,
preventing people from finding the Log and uploading new works. Information
about the Log was distributed through mailing l


veries in lab and field.”67 But as Lorraine Daston notes, “discoveries,
especially those made by serendipity, depend partly on luck, and scientists
schooled in probability theory are loathe to ascribe personal merit to the
merely lucky,” and scientists therefore increasingly began to “domesticate
serendipity.”68 Daston remarks that while scientists schooled in probability
were reluctant to ascribe their discoveries to pure chance, the “historians
and literary scholars who struck serendipitous gold in the archives did not
seem so eager to make a science out of their good fortune.”69 One tale of how
literary and historical scholars struck serendipitous gold in the archive is
provided by Mike Featherstone:

> Once in the archive, finding the right material which can be made to speak
may itself be subject to a high degree of contingency—the process not of
deliberate rational searching, but serendipity. In this context it is
interesting to note the methods of innovatory historians such as Norbert Elias
and Michel Foucault, who used the British and French national libraries in
highly unorthodox ways by reading seemingly haphazardly “on the diagonal,”
across the whole range of arts and sciences, centuries and civilizations, so
that the unusual juxtapositions they arrived at summoned up new lines of
thought and possibilities to radically re-think and reclassify received
wisdom. Here we think of the flaneur who wanders the archival textual city in
a half-dreamlike state in order to be open to the half-formed possibilities of
the material and sensitive to unusual juxtapositions and novel perceptions.70

English scholar Nancy Schultz in similar terms notes that the archive “in the
humanities” represents a “prime site for serendipitous discovery.”71 In most
of the


orld Bank Group, Public-Private Partnerships Blog_ , March 29.
50. Buck-Morss, Susan. 2006. “The flaneur, the Sandwichman and the Whore: The Politics of Loitering.” _New German Critique_ (39): 99–140.
51. Budds, Diana. 2016. “Rem Koolhaas: ‘Architecture Has a Serious Problem Today.’” _CoDesign_ 21 (May). .
52. Burkart, Patrick. 2014. _Pirate Politics: The New Information Policy Contests_. Cambridge, MA: MIT Press.
53. Burton, James, and Daisy Tam. 2016. “Towards a Parasitic Ethics.” _Theory, Culture & Society_ 33 (4): 103–125.
54. Busch, Lawrence. 2011. _Standards: Recipes for Reality_. Cambridge, MA: MIT Press.
55. Caley, Seth. 2017. “Digitization for the Masses: Taking Users Beyond Simple Searching in Nineteenth-Century Collections Online.” _Journal of Victorian Culture_ 22 (2): 248–255.
56. Cadogan, Garnette. 2016. “Walking While Black.” Literary Hub. July 8. .
57. Callon, Michel, Madeleine Akrich, Sophie Dubuisson-Quellier, Catherine Grandclément, Antoine Hennion, Bruno Latour, Alexandre Mallard, et al. 2016. _Sociologie des agencements marchands: Textes choisis_. Paris: Presses des Mines.
58. Cameron, Fiona, and Sarah Kenderdine. 2007. _Theorizing Digital Cultural Heritage: A Critical Discourse_. Cambridge, MA: MIT Press.
59. Canepi, Kitti, Becky Ryder, Michelle Sitko, and Catherine Weng. 2013. _Managing Microforms in the Digital Age_. Association for Library Collections & Technical Services. .
60. Carey, Quinn Ann. 2015. “Maksim Moshkov and lib.ru: R


searching in Weinmayr 2019


ch moral rights
or author rights (droit d’auteur), which are inspired by the humanistic and
individualistic values of the French Revolution and form part of European
copyright law. They conceive the work as an intellectual and creative
expression that is directly connected to its creator. Legal scholar Lionel
Bently observes ‘the prominence of romantic conceptions of authorship’ in the
recognition of moral rights, which are based on concepts of the originality
and authenticity of the modern subject (Lionel Bently, ‘Copyright and the
Death of the Author in Literature and Law’, Modern Law Review, 57 (1994),
973–86 (p. 977)). ‘Authenticity is the pure expression, the expressivity, of
the artist, whose soul is mirrored in the work of art.’ (Cornelia Klinger,
‘Autonomy-Authenticity-Alterity: On the Aesthetic Ideology of Modernity’ in
Modernologies: Contemporary Artists Researching Modernity and Modernism,
exhibition catalogue (Barcelona: Museu d’Art Contemporani de Barcelona, 2009),
pp. 26–28 (p. 29)) Moral rights are the personal rights of authors, which
cannot be surrendered fully to somebody else because they conceptualize
authorship as authentic extension of the subject. They are ‘rights of authors
and artists to be named in relation to the work and to control alterations of
the work.’ (Bently, ‘Copyright and the Death of the Author’, p. 977) In
contrast to copyright, moral rights are granted in perpetuity, and fall to the
estate of an artist after his or her death.

Anglo-American copyright, employed in Prince’s case, on the contrary builds
the concept of intellectual property mainly on economic and distribution
rights, against unauthorised copying, adaptation, distribution and display.
Copyright lasts for a certain amount of time, after whic

 
