Medak, Sekulic & Mertens
Book Scanning and Post-Processing Manual Based on Public Library Overhead Scanner v1.2
2014


PUBLIC LIBRARY
&
MULTIMEDIA INSTITUTE

BOOK SCANNING & POST-PROCESSING MANUAL
BASED ON PUBLIC LIBRARY OVERHEAD SCANNER

Written by:
Tomislav Medak
Dubravka Sekulić
With help of:
An Mertens

Creative Commons Attribution - Share-Alike 3.0 Germany

TABLE OF CONTENTS

Introduction
I. Photographing a printed book
II. Getting the image files ready for post-processing
III. Transformation of source images into .tiffs
IV. Optical character recognition
V. Creating a finalized e-book file
VI. Cataloging and sharing the e-book
Quick workflow reference for scanning and post-processing
References

INTRODUCTION:
BOOK SCANNING - FROM PAPER BOOK TO E-BOOK
Initial considerations when deciding on a scanning setup
Book scanning tends to be a fragile and demanding process. Many factors can go wrong or produce
results of varying quality from book to book or page to page, and resolving these issues requires
experience or technical skill. Cameras can fail to trigger, components can fail to communicate, files
can get corrupted in transfer, storage cards may not get purged, focus can fail to lock, lighting
conditions can change. There are trade-offs between automation, which is prone to instability, and
robustness, which tends to become time consuming.
Your initial choice of book scanning setup will have to take these trade-offs into consideration. If
your scanning community is confined to your hacklab, you won't be risking much if technological
sophistication and integration fails to function smoothly. But if you're aiming at a broad community
of users, with varying levels of technological skill and patience, you want to create as much
time-saving automation as possible while keeping maximum stability. Furthermore, if the time that
individual members of your scanning community can contribute is limited, you might also want to
divide some of the tasks between users according to their different skill levels.
This manual breaks the process of digitization down into a general description of the steps in the
workflow leading from the printed book to a digital e-book. In a concrete situation, each of these
steps can be addressed in various manners depending on the scanning equipment, software, hacking
skills and user skill level available to your book scanning project. Several of those steps can be
handled by a single piece of equipment or software, or you might need to use a number of them -
your mileage will vary. Therefore, the manual will try to indicate the design choices you have in the
process of planning your workflow and should help you decide which design is best for your
situation.
Introducing book scanner designs
Book scanning starts with capturing digital image files on the scanning equipment. There
are three principal types of book scanner designs:
 flatbed scanner
 single camera overhead scanner
 dual camera overhead scanner
Conventional flatbed scanners are widely available. However, they require the book to be spread
wide open and pressed down with the platen in order to break the resistance of the binding and
sufficiently expose the inner margin of the text. This makes them the most destructive approach for
the book, as well as imprecise and slow.
Therefore, book scanning projects across the globe have taken to custom-designing improvised
setups or scanner rigs that are less destructive and better suited for fast turning and capturing of
pages. Designs abound. Most include:
 one or two digital photo cameras of lesser or higher quality to capture the pages,
 a transparent V-shaped glass or Plexiglas platen to press the open book against a V-shaped
cradle, and
 a light source.
The go-to web resource to help you make an informed decision is the DIY book scanning
community at http://diybookscanner.org. A good place to start is their intro
(http://wiki.diybookscanner.org/) and scanner build list (http://wiki.diybookscanner.org/scanner-build-list).
Book scanners with a single camera are substantially cheaper, but come with the added difficulty
of de-warping the page images, distorted due to the angle at which pages are photographed, which
can sometimes be difficult to correct in post-processing. Hence, in this introductory chapter we'll
focus on two-camera designs, where the camera lens stands relatively parallel to the page. However,
with a bit of adaptation these instructions can be used with any other setup.
The Public Library scanner
The focus of this manual is the scanner built for the Public Library project, designed by Voja
Antonić (see Illustration 1). The Public Library scanner was built with immediate use by a wide
community of users in mind. Hence, the principal consideration in designing the Public Library
scanner was less sophistication and more robustness, ease of use and a distributed editing
process.
The board designs can be found here: http://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner. The current iterations use two Canon 1100D cameras with the
Canon EF-S 18-55mm 1:3.5-5.6 IS kit lens. The cameras are auto-charging.

Illustration 1: Public Library Scanner
The scanner operates by automatically lowering the Plexiglas platen, illuminating the page and then
triggering camera shutters. The turning of pages and the adjustments of the V-shaped cradle holding

the book are manual.
The scanner is operated by a two-button controller (see Illustration 2). The upper, smaller button
breaks the capture process into two steps: the first click lowers the platen, increases the light level
and allows you to adjust the book or the cradle; the second click triggers the cameras and lifts the platen.
The lower button has two modes. A quick click will execute the whole capture process in one go.
But if you hold it pressed longer, it will lower the platen, allowing you to adjust the book and the
cradle, and lift it without triggering the cameras when you press again.

Illustration 2: A two-button controller

More on this manual: steps in the book scanning process
The book scanning process in general can be broken down into six steps, each of which will be
dealt with in a separate chapter of this manual:
I. Photographing a printed book
II. Getting the image files ready for post-processing
III. Transformation of source images into .tiffs
IV. Optical character recognition
V. Creating a finalized e-book file
VI. Cataloging and sharing the e-book
A step by step manual for Public Library scanner
This manual is primarily meant to provide a detailed description and step-by-step instructions for an
actual book scanning setup -- based on Voja Antonić's scanner design described above. This is a
two-camera overhead scanner, currently equipped with two Canon 1100D cameras with the EF-S
18-55mm 1:3.5-5.6 IS kit lens. It can scan books of up to A4 page size.
The post-processing in this setup is based on a semi-automated transfer of files to a GNU/Linux
personal computer and on the use of free software for image editing, optical character recognition
and finalization of an e-book file. It was initially developed for the HAIP festival in Ljubljana in
2011 and perfected later at MaMa in Zagreb and Leuphana University in Lüneburg.
The Public Library scanner is characterized by a somewhat less automated yet more distributed
scanning process than the highly automated and sophisticated scanner hacks developed at various
hacklabs. A brief overview of one such scanner, developed at the Hacker Space Bruxelles, is also
included in this manual.
The Public Library scanning process thus proceeds in the following discrete steps:

1. creating digital images of pages of a book,
2. manual transfer of image files to the computer for post-processing,
3. automated renaming of files, ordering of even and odd pages, rotation of images and upload to a
cloud storage,
4. manual transformation of source images into .tiff files in ScanTailor,
5. manual optical character recognition and creation of PDF files in gscan2pdf.
The detailed description of the Public Library scanning process follows below.
The Bruxelles hacklab scanning process
For purposes of comparison, here we'll briefly reference the scanner built by the Bruxelles hacklab
(http://hackerspace.be/ScanBot). It is also a dual camera design. Apart from some differences in
hardware functionality (the Bruxelles scanner turns pages automatically, whereas the Public Library
scanner requires manual page turning), the fundamental difference between the two is in the
post-processing - the level of automation in the transfer of images from the cameras and their
transformation into PDF or DjVu e-book format.
The Bruxelles scanning process differs insofar as the cameras are operated by a computer and the
images are automatically transferred, ordered and made ready for further post-processing. The
scanner is home-brew, but the process is for advanced DIY'ers. If you want to know more about the
design of the scanner, contact Michael Korntheuer at contact@hackerspace.be.
The scanning and post-processing is automated by a single Python script that does all the work:
http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD
The scanner uses two Canon point and shoot cameras. Both cameras are connected to the PC with USB. They both run
PTP/CHDK (Canon Hack Development Kit). The scanning sequence is the following:
1. The script sends CHDK command line instructions to the cameras.
2. The script sorts out the incoming files. This part is tricky: there is no reliable way to distinguish
between the left and the right camera, only which camera was recognized by USB first. So the
protocol is to always power up the left camera first. See the instructions with the source code.
3. The images are collected into a PDF file.
4. A script is run to OCR the .PDF file into a plain .TXT file: http://git.constantvzw.org/?p=algolit.git;a=blob;f=scanbot_brussel/ocr_pdf.sh;h=2c1f24f9afcce03520304215951c65f58c0b880c;hb=HEAD

I. PHOTOGRAPHING A PRINTED BOOK
Technologically the most demanding part of the scanning process is creating digital images of the
pages of a printed book. It's a process that differs greatly from scanner design to scanner design,
and from camera to camera. Therefore, here we will focus strictly on the process with the Public
Library scanner.
Operating the Public Library scanner
0. Before you start:
Better and more consistent photographs make post-processing faster and the resulting digital
e-book of higher quality. In order to guarantee the quality of the images, it is necessary to set up the
cameras properly and prepare the printed book for scanning before you start.
a) Loosening the book
Depending on the type and quality of binding, some books tend to be too resistant to opening fully
to reveal the inner margin under the pressure of the scanner platen. It is thus necessary to “break in”
the book before starting in order to loosen the binding. The best way is to open it as wide as
possible in multiple places in the book. This can be done against the table edge if the book is more
rigid than usual. (Warning – “breaking in” might create irreversible creasing of the spine or lead to
some pages breaking loose.)
b) Switch on the scanner
You start the scanner by pressing the main switch or plugging the power cable into the scanner.
This will also turn on the overhead LED lights.

c) Setting up the cameras
Place the cameras onto tripods. You need to move the lever on the tripod's head to allow the tripod
plate screwed to the bottom of the camera to slide into its place. Secure the lock by turning the lever
all the way back.
If automatic chargers for the cameras are provided, open the battery lid on the bottom of each
camera and plug in the automatic charger. Close the lid.
Switch on the cameras using the lever on the top right side of the camera body and set them to
aperture priority (Av) mode using the mode dial above the lever (see Illustration 3). Use the main
dial just above the shutter button on the front side of the camera to set the aperture value to F8.0.

Illustration 3: Mode and main dial, focus mode switch, zoom
and focus ring
On the lens, turn the focus mode switch to manual (MF) and turn the large zoom ring to set the
value exactly midway between 24 and 35 mm (see Illustration 3). Try to set both cameras identically.
To focus each camera, open a book on the cradle, lower the platen by holding the big button on the
controller, and turn on the live view on camera LCD by pressing the live view switch (see
Illustration 4). Now press the magnification button twice and use the focus ring on the front of the
lens to get a clear image view.

Illustration 4: Live view switch and magnification button

d) Connecting the cameras
Now connect the cameras to the remote shutter trigger cables that can be found lying on each side
of the scanner. They need to be plugged into a small round port hidden behind a protective rubber
cover on the left side of the cameras.
e) Placing the book into the cradle and double-checking the cameras
Open the book in the middle and place it on the cradle. Hold the large button on the controller
pressed to lower the Plexiglas platen without triggering the cameras. Move the cradle so that the
platen fits into the middle of the book.
Turn on the live view on the cameras' LCD screens to see whether the pages fit into the image and
whether the cameras are positioned parallel to the page.
f) Double-check storage cards and batteries
It is important that both storage cards are empty before starting the scanning, in order not to mess
up the page sequence when merging photos from the left and the right camera in post-processing.
To double-check, press the play button on each camera and erase any photos left over from a
previous scan: press the menu button, select the fifth menu from the left, then select 'Erase Images'
-> 'All images on card' -> 'OK'.
If no automatic chargers are provided, double-check on the information screen that the batteries are
charged. They should be fully charged before starting to scan a new book.

g) Turn off the light in the room
Lighting conditions during scanning should be as constant as possible. To reduce glare and achieve
maximum quality, remove any source of light that might reflect off the Plexiglas platen. Preferably
turn off the lights in the room or isolate the scanner with the black cloth provided.

1. Photographing a book
Now you are ready to start scanning. Place the closed book in the cradle and lower the platen by
holding the large button on the controller pressed (see Illustration 2). Adjust the position of the
cradle and lift the platen by pressing the large button again.
To scan, you can now either use the small button on the controller to lower the platen, adjust the
book, and then press it again to trigger the cameras and lift the platen; or you can make a short
press on the large button to do it all in one go.
ATTENTION: When the cameras are triggered, the shutter sound has to be heard coming
from both cameras. If one camera is not working, it's best to reconnect both cameras (see
Section 0), make sure the batteries are charged or adapters are connected, erase all images
and restart.
A mistake made in the photographing requires a lot of work in the post-processing, so it's
much quicker to repeat the photographing process.
If you make a mistake while flipping pages, or any other mistake, go back and scan again from the
page you missed or scanned incorrectly. Note down the page where the error occurred; the
redundant images will be removed in post-processing.
ADVICE: The scanner has a digital counter. By turning the dial forward and backward, you
can set it to tell you what page you should be scanning next. This should help you avoid
missing a page due to a distraction.
While scanning, move the cradle a bit to the left from time to time, making sure that the tip of the V-shaped platen is aligned with the center of the book and the inner margin is exposed enough.

II. GETTING THE IMAGE FILES READY FOR POST-PROCESSING
Once the book pages have been photographed, they have to be transferred to the computer and
prepared for post-processing. With two-camera scanners, the capturing process will result in two
separate sets of images -- odd and even pages, coming from the left and right cameras respectively
-- and you will need to rename and reorder them accordingly, rotate them into a vertical position
and collate them into a single sequence of files.
a) Transferring image files
For the transfer of files, your principal design choices are either to remove the memory cards from
the cameras and copy the files to the computer via a card reader, or to transfer them via a USB
cable. The latter process can be automated by operating your cameras remotely from a computer;
however, this can be done only with certain Canon cameras (http://bit.ly/16xhJ6b) that can be
hacked to run the open Canon Hack Development Kit firmware (http://chdk.wikia.com).
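The USB route can be scripted. As a rough sketch (assuming the free gphoto2 command-line tool and a camera it supports -- the Public Library workflow itself uses the card reader script described further below), a transfer session could look like the following, printed here as a dry run:

```shell
# Dry run: each line prints the gphoto2 command instead of executing it.
# Drop the leading "echo" to actually run the commands; this requires
# gphoto2 installed and a supported camera connected over USB.
echo gphoto2 --auto-detect                 # list the cameras recognized on USB
echo gphoto2 --get-all-files               # download every image from the camera
echo gphoto2 --delete-all-files --recurse  # purge the memory card afterwards
```

With two cameras connected at once you would additionally have to address each by its USB port, so for a first attempt it is simpler to transfer one camera at a time.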
After transferring the files, you want to erase all the image files on the camera memory card, so that
they don't end up messing up the scan of the next book.
b) Renaming image files
As the left and right cameras are typically operated in sync, the photographing process results in
two separate sets of images, with even and odd pages respectively, that have completely different
file names and potentially the same time stamps. So before you collate the page images in the order
in which they appear in the book, you want to rename the files so that the first image comes from
the right camera, the second from the left camera, the third again from the right camera and so on.
You probably want to do a batch renaming, where your right camera files start with n and are offset
by an increment of 2 (e.g. page_0000.jpg, page_0002.jpg, ...) and your left camera files start with
n+1 and are also offset by an increment of 2 (e.g. page_0001.jpg, page_0003.jpg, ...).
Batch renaming can be done from your file manager, on the command line or with a number of
GUI applications (e.g. GPrename, rename, cuteRenamer on GNU/Linux).
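For illustration, the renaming scheme above can be sketched in plain POSIX shell. The folder names right/ and left/ and the IMG_*.JPG pattern are assumptions standing in for whatever your cameras produce; the snippet demonstrates the logic on dummy files in a scratch directory:

```shell
# Demonstration in a scratch directory, with empty dummy files standing in
# for the camera images (in practice these would be the .JPGs from the cards).
cd "$(mktemp -d)"
mkdir right left
touch right/IMG_2001.JPG right/IMG_2002.JPG left/IMG_5001.JPG left/IMG_5002.JPG

# Right camera: even page numbers starting at 0.
n=0
for f in right/*.JPG; do
    mv "$f" "$(printf 'page_%04d.jpg' "$n")"
    n=$((n + 2))
done

# Left camera: odd page numbers starting at 1.
n=1
for f in left/*.JPG; do
    mv "$f" "$(printf 'page_%04d.jpg' "$n")"
    n=$((n + 2))
done

ls page_*.jpg    # page_0000.jpg page_0001.jpg page_0002.jpg page_0003.jpg
```

The shell expands each glob in sorted order, so as long as each camera names its files sequentially, the page order is preserved.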
c) Rotating image files
Before you collate the renamed files, you might want to rotate them. This is a step that can also be
done later in the post-processing (see below), but if you are automating or scripting your steps this
is a practical place to do it. The images leaving your cameras will be oriented horizontally. To
orient them vertically, the images from the camera on the right have to be rotated by 90 degrees
counter-clockwise, and the images from the camera on the left by 90 degrees clockwise.
Batch rotating can be done in a number of photo-processing tools, on the command line or in
dedicated applications (e.g. Fstop, ImageMagick, Nautilus Image Converter on GNU/Linux).
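As a sketch of the rotation step with ImageMagick, the following shell function prints the command for a given page, using the even/odd naming convention from the renaming step (in ImageMagick, a positive angle rotates clockwise, a negative one counter-clockwise). It is a dry run, so it only shows what would be executed:

```shell
# Print the ImageMagick rotation command for one renamed page image.
# Even-numbered pages came from the right camera, odd-numbered from the left.
rotate_cmd() {
    num=${1#page_}          # strip the "page_" prefix
    num=${num%.jpg}         # strip the ".jpg" suffix
    case $num in
        *[02468]) echo "mogrify -rotate -90 $1" ;;  # right camera: 90 deg CCW
        *)        echo "mogrify -rotate 90 $1"  ;;  # left camera: 90 deg CW
    esac
}

rotate_cmd page_0000.jpg    # prints: mogrify -rotate -90 page_0000.jpg
rotate_cmd page_0001.jpg    # prints: mogrify -rotate 90 page_0001.jpg
```

To actually rotate the files (ImageMagick required), pipe the printed commands to the shell, e.g. `for f in page_*.jpg; do rotate_cmd "$f"; done | sh`. Note that mogrify overwrites the files in place.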
d) Collating images into a single batch
Once you're done with the renaming and rotating of the files, you want to collate them into the same
folder for easier manipulation later.

Getting the image files ready for post-processing on the Public Library scanner
In the case of Public Library scanner, a custom C++ script was written by Mislav Stublić to
facilitate the transfer, renaming, rotating and collating of the images from the two cameras.
The script prompts the user to place the memory card from the right camera into the card reader
first, gives a preview of the first and last four images and provides an entry field to create a
sub-folder in a local cloud storage folder (path: /home/user/Copy).
It transfers, renames and rotates the files, deletes them from the card and prompts the user to
replace the card with the one from the left camera in order to transfer the files from there and place
them in the same folder. The script was created for GNU/Linux systems and can be downloaded,
together with its source code, from: https://copy.com/nLSzflBnjoEB
If you have cameras other than Canon, you can edit line 387 of the source file to adapt it to the
naming convention of your cameras, and recompile by running the following command in your
terminal: "gcc scanflow.c -o scanflow -ludev `pkg-config --cflags --libs gtk+-2.0`"
In the case of the Hacker Space Bruxelles scanner, this is handled by the same script that operates the
cameras, which can be downloaded from:
http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD

III. TRANSFORMATION OF SOURCE IMAGES INTO .TIFFS
Images transferred from the cameras are high-definition, full-color images. You want your cameras
to shoot at the largest possible .jpg resolution in order for the resulting files to have at least 300 dpi
(A4 at 300 dpi requires roughly a 9.5 megapixel image). In post-processing, the size of the image
files needs to be reduced radically, so that several hundred images can be merged into an e-book
file of a tolerable size.
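The megapixel figure can be sanity-checked: the A4 page area alone at 300 dpi comes to roughly 8.7 megapixels, and the camera frame also captures some of the surroundings of the page, which is where the rest of the ~9.5 megapixel budget goes. A quick back-of-the-envelope calculation (a sketch, not part of the workflow itself):

```shell
# A4 is 210 x 297 mm; one inch is 25.4 mm.
awk 'BEGIN {
    w = 210 / 25.4 * 300    # page width in pixels at 300 dpi
    h = 297 / 25.4 * 300    # page height in pixels at 300 dpi
    printf "%.0f x %.0f px, %.1f megapixels\n", w, h, w * h / 1e6
}'
# prints: 2480 x 3508 px, 8.7 megapixels
```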
Hence, the first step in the post-processing is to crop the images from the cameras down to just the
content of the pages. The surroundings of the book that were captured in the photograph and the
white margins of the page will be cropped away, while the printed text will be transformed into
black letters on a white background. The illustrations, however, will need to be preserved in their
color or grayscale form, and mixed with the black and white text. What were initially large .jpg
files will now become relatively small .tiff files that are ready for the optical character recognition
(OCR) process.
These tasks can be completed by a number of software applications. Our manual will focus on one
that can be used across all major operating systems -- ScanTailor. ScanTailor can be downloaded
from: http://scantailor.sourceforge.net/. A more detailed video tutorial of ScanTailor can be found
here: http://vimeo.com/12524529.
ScanTailor: from a photograph of a page to a graphic file ready for OCR
Once you have transferred all the photos from the cameras to the computer, renamed and rotated
them, they are ready to be processed in ScanTailor.
1) Importing photographs to ScanTailor
- start ScanTailor and open ‘new project’
- for ‘input directory’ choose the folder where you stored the transferred and renamed photo images
- you can leave ‘output directory’ as it is, it will place your resulting .tiffs in an 'out' folder inside
the folder where your .jpg images are
- select all files (if you followed the naming convention above, they will be named
‘page_xxxx.jpg’) in the folder where you stored the transferred photo images, and click 'OK'
- in the dialog box ‘Fix DPI’ click on All Pages, and for DPI choose preferably '600x600', click
'Apply', and then 'OK'
2) Editing pages
2.1 Rotating photos/pages
If you've rotated the photo images in the previous step using the scanflow script, skip this step.
- Rotate the first photo counter-clockwise, click Apply and for scope select ‘Every other page’
followed by 'OK'
- Rotate the following photo clockwise, applying the same procedure as in the previous step
2.2 Deleting redundant photographs/pages
- Remove redundant pages (photographs of the empty cradle at the beginning and the end of the
book scanning sequence; book cover pages if you don’t want them in the final scan; duplicate pages
etc.) by right-clicking on a thumbnail of that page in the preview column on the right side, selecting
‘Remove from project’ and confirming by clicking on ‘Remove’.

NOTE: If you accidentally remove a wrong page, you can re-insert it by right-clicking on a page
before/after the missing page in the sequence, selecting 'insert after/before' (depending on which
page you selected) and choosing the file from the list. Before you finish adding, it is necessary to
go through the procedure of fixing DPI and rotating again.
2.3 Adding missing pages
- If you notice that some pages are missing, you can recapture them with the camera and insert them
manually at this point using the procedure described above under 2.2.
3) Split pages and deskew
Steps ‘Split pages’ and ‘Deskew’ should work automatically. Run them by clicking the ‘Play’ button
under the ‘Select content’ function. This will do three steps automatically: splitting of pages,
deskewing and selection of content. After this you can manually re-adjust the splitting and the
deskewing.
4) Selecting content
Step ‘Select content’ works automatically as well, but it is important to revise the resulting selection
manually, page by page, to make sure the entire content is selected on each page (including the
header and page number). Where necessary, use your pointing device to adjust the content selection.
If the inner margin is cut off, go back to the 'Split pages' view and manually adjust the split area. If
the page is skewed, go back to 'Deskew' and adjust the skew of the page. After this, go back to
'Select content' and readjust the selection if necessary.
This is the step where you do visual control of each page. Make sure all pages are there and
selections are as equal in size as possible.
At the bottom of the thumbnail column there is a sort option that can automatically arrange pages
by the height and width of the selected content, making the process of manual selection easier.
Extreme differences in height should be avoided; try to make the selected areas as equal as
possible, particularly in height, across all pages. The exception are the cover and back pages, where
we advise selecting the full page.
5) Adjusting margins
For best results, select the full page for the cover and back page in the previous step. Now go to the
'Margins' step and, under the Margins section, set Top, Bottom, Left and Right to 0.0, then do
'Apply to...' → 'All pages'.
In the Alignment section leave 'Match size with other pages' ticked, choose the central positioning
of the page and do 'Apply to...' → 'All pages'.
6) Outputting the .tiffs
Now go to the 'Output' step. Ignore the 'Output Resolution' section.
Next, review two consecutive pages from the middle of the book to see if the scanned text is too
faint or too dark. If it is, use the Thinner – Thicker slider to adjust, then do 'Apply to' → 'All pages'.
Next go to the cover page and select under Mode 'Color / Grayscale' and tick on 'White Margins'.
Do the same for the back page.
If there are any pages with illustrations, you can choose the 'Mixed' mode for those pages and then
adjust the zones of the illustrations under the 'Picture Zones' tab.
Now you are ready to output the files. Just press the 'Play' button under 'Output'. Once the
computer has finished processing the images, do 'File' → 'Save as' and save the project.

IV. OPTICAL CHARACTER RECOGNITION
Before the edited-down graphic files are finalized as an e-book, we want to transform the image of
the text into actual text that can be searched, highlighted, copied and transformed. That
functionality is provided by Optical Character Recognition. This is a technically difficult task -
dependent on language, script, typeface and quality of print - and there aren't that many OCR tools
that are good at it. There is, however, a relatively good free software solution - Tesseract
(http://code.google.com/p/tesseract-ocr/) - which has solid performance and good language data,
and can be trained for even better performance, although it has its problems. Proprietary solutions
(e.g. Abbyy FineReader) sometimes provide superior results.
Tesseract supports primarily .tiff files as its input format. It produces a plain text file that can, with
the help of other tools, be embedded as a separate layer under the original graphic image of the
text in a PDF file.
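As a sketch of what the command-line invocation looks like (assuming English language data and ScanTailor's out/ folder of .tif files; printed here as a dry run):

```shell
# Dry run: print the Tesseract invocation for every .tif in the out/ folder.
# Drop the "echo" to actually run the OCR (requires the tesseract package and
# language data, e.g. tesseract-ocr-eng); each run writes a .txt file next to
# its .tif, e.g. out/page_0001.txt for out/page_0001.tif.
for f in out/*.tif; do
    [ -e "$f" ] || continue       # skip when the folder is empty
    echo tesseract "$f" "${f%.tif}" -l eng
done
```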
With the help of other tools, OCR can also be performed against other input files, such as
graphic-only PDF files. This produces inferior results, depending again on the quality of the graphic
files and the reproduction of text in them. One such tool is a bash script to OCR a PDF file, which
can be found here: https://github.com/andrecastro0o/ocr/blob/master/ocr.sh
As mentioned in the 'before scanning' section, the quality of the original book will influence the
quality of the scan and thus the quality of the OCR. For a comparison, have a look here:
http://www.paramoulipist.be/?p=1303
Once you have your .txt file, there is still some work to be done. Because OCR has difficulty
interpreting particular elements of the layout and fonts, the TXT file will come with a lot of errors.
Recurrent problems are:
- combinations of specific letters in some fonts (it can mistake 'm' for 'n' or 'I' for 'i' etc.);
- headers become part of body text;
- footnotes are placed inside the body text;
- page numbers are not recognized as such.

V. CREATING A FINALIZED E-BOOK FILE
After the optical character recognition has been completed, the resulting text can be merged with
the images of the pages and output into an e-book format. While proper e-book file formats such as
ePub have increasingly been gaining ground, PDFs still remain popular because many people tend
to read on their computers, and PDFs retain the original layout of the book on paper, including the
absolute pagination needed for referencing in citations. DjVu, an alternative to PDF used because
of its purported superiority, is also an option, but it is far less popular.
The export to PDF can, again, be done with a number of tools. In our case we'll complete the
optical character recognition and PDF export in gscan2pdf. Again, the proprietary Abbyy
FineReader will produce somewhat smaller PDFs.
If you prefer to use an e-book format that works better with e-book readers, you will obviously
have to remove some of the elements that appear in the book - headers, footers, footnotes and
pagination.

This can be done earlier in the process, when cropping down the original .jpg image files (see
under III), or later by transforming the PDF files. The latter can be done in Calibre
(http://calibre-ebook.com) by converting the PDF into an ePub, where it can be further tweaked to
better accommodate or remove the headers, footers, footnotes and pagination.
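With Calibre installed, the conversion itself can also be done from the command line with its ebook-convert tool (the file names here are hypothetical; printed as a dry run):

```shell
# Dry run: print the Calibre conversion command. Drop the "echo" to actually
# convert (ebook-convert ships with Calibre); further tweaking of headers,
# footers and pagination is then done in Calibre's editor or conversion options.
echo ebook-convert scanned-book.pdf scanned-book.epub
```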
Optical character recognition and PDF export in Public Library workflow
Optical character recognition with the Tesseract engine can be performed on GNU/Linux by a
number of command line and GUI tools. Many of those tools also exist for other operating systems.
For the users of the Public Library workflow, we recommend using the gscan2pdf application both
for the optical character recognition and for the PDF or DjVu export.
To do so, start gscan2pdf and open your .tiff files. To OCR them, go to 'Tools' and select 'OCR'. In
the dialog box select the Tesseract engine and your language, and click 'Start OCR'. Once the OCR
is finished, export the graphic files and the OCR text to PDF by selecting 'Save as'.
However, given that the proprietary solutions sometimes produce better results, these tasks can also
be done, for instance, with Abbyy FineReader running on a Windows operating system inside
VirtualBox. The prerequisites are that you have both Windows and Abbyy FineReader to install in
VirtualBox. Once you have both installed, designate a shared folder in VirtualBox and transfer the
.tiff files from your 'out' folder there. Then start VirtualBox, boot the Windows image and start
Abbyy FineReader in Windows. Open the files from the shared folder, let Abbyy FineReader read
them and, once it's done, output the result into a PDF.

VI. CATALOGING AND SHARING THE E-BOOK
Your road from a book on paper to an e-book is complete. If you want to maintain your library, you
can use Calibre, a free software tool for e-book library management. You can add metadata to your
book using existing catalogues, or you can enter the metadata manually.
Now you may want to distribute your book. If the work you've digitized is in the public domain
(https://en.wikipedia.org/wiki/Public_domain), you might consider contributing it to Project
Gutenberg
(http://www.gutenberg.org/wiki/Gutenberg:Volunteers'_FAQ#V.1._How_do_I_get_started_as_a_Project_Gutenberg_volunteer.3F),
Wikibooks (https://en.wikibooks.org/wiki/Help:Contributing) or Archive.org.
If the work is still under copyright, you might explore a number of different options for sharing.

QUICK WORKFLOW REFERENCE FOR SCANNING AND
POST-PROCESSING ON PUBLIC LIBRARY SCANNER
I. PHOTOGRAPHING A PRINTED BOOK
0. Before you start:
- loosen the book binding by opening it wide on several places
- switch on the scanner
- set up the cameras:
- place cameras on tripods and fit them tightly
- plug in the automatic chargers into the battery slot and close the battery lid
- switch on the cameras
- switch the lens to Manual Focus mode
- switch the cameras to Av mode and set the aperture to 8.0
- turn the zoom ring to set the focal length exactly midway between 24mm and 35mm
- focus by turning on the live view, pressing magnification button twice and adjusting the
focus to get a clear view of the text
- connect the cameras to the scanner by plugging the remote trigger cable to a port behind a
protective rubber cover on the left side of the cameras
- place the book into the cradle
- double-check storage cards and batteries
- press the play button on the back of the camera to double-check if there are images on the
camera - if there are, delete all the images from the camera menu
- if using batteries, double-check that batteries are fully charged
- switch off any lights in the room that could reflect off the platen and cover the scanner with the
black cloth
1. Photographing
- now you can start scanning: either press the smaller button on the controller once to
lower the platen and adjust the book, then press it again to increase the light intensity, trigger the
cameras and lift the platen; or press the large button to complete the entire sequence in one
go;
- ATTENTION: Shutter sound should be coming from both cameras - if one camera is not
working, it's best to reconnect both cameras, make sure the batteries are charged or adapters
are connected, erase all images and restart.
- ADVICE: The scanner has a digital counter. By turning the dial forward and backward,
you can set it to tell you what page you should be scanning next. This should help you to
avoid missing a page due to a distraction.

II. Getting the image files ready for post-processing
- after finishing with scanning a book, transfer the files to the post-processing computer
and purge the memory cards
- if transferring the files manually:
- create two separate folders, one for each camera's card
- transfer the image files from the cards and, using batch renaming
software, rename the files from the right camera following the convention
page_0001.jpg, page_0003.jpg, page_0005.jpg... and the files from the left camera
following the convention page_0002.jpg, page_0004.jpg, page_0006.jpg...
- collate image files into a single folder
- before ejecting each card, delete all the photo files on the card
- if using the scanflow script:
- start the script on the computer
- place the card from the right camera into the card reader
- enter the name of the destination folder following the convention
"Name_Surname_Title_of_the_Book" and transfer the files
- repeat with the other card
- the script will automatically transfer, rename, rotate and collate the files in proper
order, and delete them from the card
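The renaming convention above (odd page numbers from the right camera, even from the left) can be sketched in a few lines of Python. This is only an illustration of the convention, not the actual scanflow script; the folder names are hypothetical:

```python
import shutil
from pathlib import Path

def collate(right_dir, left_dir, dest_dir):
    """Copy and rename camera files into a single folder:
    right camera -> page_0001.jpg, page_0003.jpg, ...
    left camera  -> page_0002.jpg, page_0004.jpg, ..."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for first_page, src in ((1, right_dir), (2, left_dir)):
        # shooting order matches the cameras' own filename order
        for i, photo in enumerate(sorted(Path(src).glob("*.jpg"))):
            page_number = first_page + 2 * i
            shutil.copy2(photo, dest / f"page_{page_number:04d}.jpg")
```

Sorting the collated folder by filename then yields the pages in reading order.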
III. Transformation of source images into .tiffs
ScanTailor: from a photograph of page to a graphic file ready for OCR
1) Importing photographs to ScanTailor
- start ScanTailor and open a 'new project'
- for 'input directory' choose the folder where you stored the transferred photo images
- you can leave 'output directory' as it is; it will place your resulting .tiffs in an 'out' folder
inside the folder where your .jpg images are
- select all files (if you followed the naming convention above, they will be named
‘page_xxxx.jpg’) in the folder where you stored the transferred photo images, and click
'OK'
- in the dialog box ‘Fix DPI’ click on All Pages, and for DPI choose preferably '600x600',
click 'Apply', and then 'OK'
2) Editing pages
2.1 Rotating photos/pages
If you've rotated the photo images in the previous step using the scanflow script, skip this step.
- rotate the first photo counter-clockwise, click Apply and for scope select 'Every other
page' followed by 'OK'
- rotate the following photo clockwise, applying the same procedure as in the previous
step

2.2 Deleting redundant photographs/pages
- remove redundant pages (photographs of the empty cradle at the beginning and the end;
book cover pages if you don't want them in the final scan; duplicate pages, etc.) by
right-clicking on a thumbnail of that page in the preview column on the right, selecting
'Remove from project' and confirming by clicking on 'Remove'.
# If you accidentally remove the wrong page, you can re-insert it by right-clicking on a page
before/after the missing page in the sequence, selecting 'insert after/before' and choosing the file
from the list. Before you finish adding, you must again go through the procedure of fixing the DPI
and rotating.
2.3 Adding missing pages
- If you notice that some pages are missing, you can recapture them with the camera and
insert them manually at this point using the procedure described above under 2.2.
3) Split pages and deskew
- The functions 'Split Pages' and 'Deskew' should work automatically. Run them by
clicking the 'Play' button under the 'Select content' step. This will do three steps
automatically: splitting of pages, deskewing and selection of content. After this you can
manually re-adjust the splitting of pages and the deskewing.

4) Selecting content and adjusting margins
- Step ‘Select content’ works automatically as well, but it is important to revise the
resulting selection manually page by page to make sure the entire content is selected on
each page (including the header and page number). Where necessary use your pointer device
to adjust the content selection.
- If the inner margin is cut, go back to 'Split pages' view and manually adjust the selected
split area. If the page is skewed, go back to 'Deskew' and adjust the skew of the page. After
this go back to 'Select content' and readjust the selection if necessary.
- This is the step where you do visual control of each page. Make sure all pages are there
and selections are as equal in size as possible.
- At the bottom of the thumbnail column there is a sort option that can automatically arrange
pages by the height and width of the selected content, making the process of manual
selection easier. Avoid extreme differences in height: try to make the selected areas as
equal as possible, particularly in height, across all pages. The exception are the cover and
back pages, where we advise selecting the full page.

5) Adjusting margins
- Now go to the 'Margins' step and, under the Margins section, set Top, Bottom, Left and
Right to 0.0, then do 'Apply to...' → 'All pages'.
- In the Alignment section leave 'Match size with other pages' ticked, choose the central
positioning of the page and do 'Apply to...' → 'All pages'.
6) Outputting the .tiffs
- Now go to the 'Output' step.
- Review two consecutive pages from the middle of the book to see if the scanned text is
too faint or too dark. If it is, use the Thinner – Thicker slider to adjust it, then do
'Apply to' → 'All pages'.
- Next go to the cover page, select 'Color / Grayscale' under Mode and tick 'White
Margins'. Do the same for the back page.
- If there are any pages with illustrations, you can choose the 'Mixed' mode for those
pages and then adjust the zones of the illustrations under the 'Picture Zones' tab.
- To output the files, press the 'Play' button under 'Output'. Save the project.
IV. Optical character recognition & V. Creating a finalized e-book file
If using all free software:
1) open gscan2pdf (if it is not already installed on your machine, install gscan2pdf from the
repositories, and Tesseract plus the data for your language from https://code.google.com/p/tesseract-ocr/)
- point gscan2pdf to open your .tiff files
- for optical character recognition, select 'OCR' under the drop-down menu 'Tools',
select the Tesseract engine and your language, and start the process
- once the OCR is finished, output to PDF by going under 'File' and selecting 'Save'; edit the
metadata, select the format, and save
If using non-free software:
2) open Abbyy FineReader in VirtualBox (note: only Abbyy FineReader 10 installs and works, with some limitations, under GNU/Linux)
- transfer files in the 'out' folder to the folder shared with the VirtualBox
- point it to the readied .tiff files and it will complete the OCR
- save the file

REFERENCES
For more information on the book scanning process in general and making your own book scanner
please visit:
DIY Book Scanner: http://diybookscanner.org
Hacker Space Bruxelles scanner: http://hackerspace.be/ScanBot
Public Library scanner: http://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner/
Other scanner builds: http://wiki.diybookscanner.org/scanner-build-list
For more information on automation:
Konrad Voeckel's post-processing script (From Scan to PDF/A):
http://blog.konradvoelkel.de/2013/03/scan-to-pdfa/
Johannes Baiter's automation of scanning to PDF process: http://spreads.readthedocs.org
For more information on applications and tools:
Calibre e-book library management application: http://calibre-ebook.com/
ScanTailor: http://scantailor.sourceforge.net/
gscan2pdf: http://sourceforge.net/projects/gscan2pdf/
Canon Hack Development Kit firmware: http://chdk.wikia.com
Tesseract: http://code.google.com/p/tesseract-ocr/
Python script of the Hacker Space Bruxelles scanner: http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD


Bodo
A Short History of the Russian Digital Shadow Libraries
2014


Draft Manuscript, 11/4/2014, DO NOT CITE!

A short history of the Russian digital shadow libraries
Balazs Bodo, Institute for Information Law, University of Amsterdam

“What I see as a consequence of the free educational book distribution: in decades generations of people
everywhere in the World will grow with the access to the best explained scientific texts of all times.
[…]The quality and accessibility of education to poors will drastically grow too. Frankly, I'm seeing this as
the only way to naturally improve mankind: by breeding people with all the information given to them at
any time.” – Anonymous admin of Aleph, explaining the raison d'être of the site

Abstract
RuNet, the Russian segment of the internet, is now home to the most comprehensive scientific pirate
libraries on the net. These sites offer free access to hundreds of thousands of books and millions of
journal articles. In this contribution we try to understand the factors that led to the development of
these sites, and the sociocultural and legal conditions that enable them to operate under hostile legal
and political conditions. Through reconstructing the micro-histories of peer-produced online text
collections that played a central role in the history of RuNet, we are able to link the formal and informal
support for these sites to the specific conditions that developed in Soviet and post-Soviet times.

(pirate) libraries on the net
The digitization and collection of texts was one of the very first activities enabled by computers. Project
Gutenberg, the first in a long line of digital libraries, was established as early as 1971. By the early nineties, a
number of online electronic text archives had emerged, all hoping to finally realize a dream chased by
humans ever since the first library: the collection of everything (Battles, 2004), the Memex
(Bush, 1945), the Mundaneum (Rieusset-Lemarié, 1997), the Library of Babel (Borges, 1998). It did not
take long to realize that the dream was still beyond reach: the information storage and retrieval
technology might have been ready, but copyright law, for the foreseeable future, was not. Copyright
protection and enforcement slowly became one of the most crucial issues around digital technologies.

Electronic copy available at: http://ssrn.com/abstract=2616631

And as that happened, the texts that had been archived without authorization were purged from the
budding digital collections. Those that survived complete deletion were moved into the dark, locked-down
sections of digital libraries that sometimes still lurk behind law-abiding public façades. Hopes that a
universal digital library could be built were lost in just a few short years, as those who tried (such as
Google or HathiTrust) got bogged down in endless court battles.
There are unauthorized text collections circulating on channels less susceptible to enforcement, such as
DVDs, torrents, or IRC channels. But the technical conditions of these distribution channels do not enable
the development of a library. Two of the most essential attributes of any proper library, the catalogue
and the community, are hard to provide on such channels. The catalog doesn't just organize the
knowledge stored in the collection; it is not just a tool for searching and browsing. It is a critical
component in the organization of the community of “librarians” who preserve and nourish the
collection. The catalog is what distinguishes an unstructured heap of computer files from a
well-maintained library, but it is the same catalog that makes shadow libraries, these unauthorized text
collections, an easy target of law enforcement. Those few digital online libraries that dare to provide
unauthorized access to texts in an organized manner, such as textz.org, a*.org, monoskop or
Gigapedia/library.nu, have all had their bad experiences with law enforcement and rights-holder dismay.
Of these pirate libraries, Gigapedia (later called Library.nu) was the largest at the turn of the 2010s. At
its peak, it was several orders of magnitude bigger than its peers, offering access to nearly a million
English-language documents. It was not just size that made Gigapedia unique. Unlike most sites, it
moved beyond its initial specialization in scientific texts to incorporate a wide range of academic
disciplines. Compared to its peers, it also had a highly developed central metadata database, which
contained bibliographic details on the collection and also, significantly, on gaps in the collection, which
underpinned a process of actively soliciting contributions from users. With ubiquitous scanner/copiers,
producing a book scan was as easy as photocopying, and so the collection grew rapidly.
Gigapedia’s massive catalog made the site popular, which in turn made it a target. In early 2012, a group
of 17 publishers was granted an injunction against the site (now called Library.nu; and against iFile.it—
the hosting site that stored most of Library.nu’s content). Unlike the record and movie companies,
which had collaborated on dozens of lawsuits over the past decade, the Library.nu injunction and lawsuit
were the first coordinated publisher actions against a major file-sharing site, and the first to involve
major university publishers in particular. Under the injunction, the Library.nu administrators closed the
site. The collection disappeared and the community around it dispersed (Liang, 2012).
Gigapedia’s collection was integrated into Aleph’s predominantly Russian language collection before the
shutdown, making Aleph the natural successor of Gigapedia/library.nu.

Libraries in the RuNet

The search for Gigapedia's successor soon zeroed in on a number of sites with strong hints of their
Russian origins. Sites like Aleph, [sc], [fi], [os] are open, completely free to use, and each offers access
to a catalog comparable to the late
Gigapedia’s.
The similarity of these seemingly distinct services is no coincidence. These sites constitute a tightly knit
network, in which Aleph occupies the central position. Aleph, as its name suggests, is the source library:
it aims to be the seed of all scientific digital libraries on the net. Its mission is simple and straightforward. It
collects free-floating scientific texts and other collections from the Internet and consolidates them (both
content and metadata) into a single, open database. Though ordinary users can search the catalog and
retrieve the texts, its main focus is the distribution of the catalog and the collection to anyone who
wants to build services upon them. Aleph maintains regularly updated links that point to its own, neatly
packaged source code, its database dump, and terabytes' worth of collection. It is a knowledge infrastructure
that can be freely accessed, used and built upon by anyone. This radical openness enables a number of
other pirate libraries to offer Aleph’s catalogue along with books coming from other sources. By
mirroring Aleph they take over tasks that the administrators of Aleph are unprepared or unwilling to do.
Handling much of the actual download traffic they relieve Aleph from the unavoidable investment in
servers and bandwidth, which, in turn puts less pressure on Aleph to engage in commercial activities to
finance its operation. While Aleph stays in the background, the network of mirrors compete for
attention, users and advertising revenue as their design, business model, technical sophistication is finetuned to the profile of their intended target audience.
This strategy of creating an open infrastructure serves Aleph well. It ensures the widespread distribution
of books while minimizing (legal) exposure. By relinquishing control, Aleph also ensures its own
long-term survival, as it is copied again and again. In fact, openness is the core element of Aleph's
philosophy, which one of its administrators summed up as follows:
“- collect valuable science/technology/math/medical/humanities academic literature. That is,
collect humanity's valuable knowledge in digital form. Avoid junky books. Ignore "bestsellers".
- build a community of people who share knowledge, improve quality of books, find good and
valuable books, and correct errors.
- share the files freely, spreading the knowledge altruistically, not trying to make money, not
charging money for knowledge. Here people paid money for many books that they considered
valuable and then shared here on [Aleph], for free. […]
This is the true spirit of the [Aleph] project.”

Reading, publishing, censorship and libraries in Soviet-Russia
“[T]he library of the Big Lubyanka was unique. In all probability it had been assembled out of confiscated
private libraries. The bibliophiles who had collected those books had already rendered up their souls to
God. But the main thing was that while State Security had been busy censoring and emasculating all the
libraries of the nation for decades, it forgot to dig in its own bosom. Here, in its very den, one could read
Zamyatin, Pilnyak, Panteleimon Romanov, and any volume at all of the complete works of Merezhkovsky.
(Some people wisecracked that they allowed us to read forbidden books because they already regarded
us as dead. But I myself think that the Lubyanka librarians hadn't the faintest concept of what they were
giving us—they were simply lazy and ignorant.)”
(Solzhenitsyn, 1974)
In order to properly understand the factors that shaped Russian pirate librarians' and their wider
environments' attitudes towards bottom-up, collaborative, copyright-infringing, open-source digital
librarianship, we need to go back nearly a century and take a close look at the specific social and political
conditions of the Soviet era that shaped the contemporary Russian intelligentsia's attitudes towards
knowledge.

The communist ideal of a reading nation
Russian culture has always had a reverence for the printed word, and the Soviet state, with its Leninist
program of mass education, further stressed the idea of an educated, reading public. As Stelmach (1993)
put it:
Reading almost transplanted religion as a sacred activity: in the secularized socialist state, where the
churches were closed, the free press stifled and schools and universities politicized, literature became the
unique source of moral truth for the population. Writers were considered teachers and prophets.
The Soviet Union was a reading culture: in the last days of the USSR, a quarter of the adult population
were considered active readers, and almost everyone else categorized as an occasional reader. Book
prices were low, alternative forms of entertainment were scarce, and people were poor, making reading
one of the most attractive leisure activities.
The communist approach towards intellectual property protection reflected the idea of the reading
nation. The Soviet Union inherited a lax and isolationist copyright system from tsarist Russia. Neither
the tsarist Russian state nor the Soviet state adhered to international copyright treaties, nor did they
enter into bilateral treaties. Tsarist Russia’s refusal to grant protection to foreign authors and
translations had primarily an economic rationale. The Soviet regime added a strong ideological claim:
granting exclusive ownership to authors was against the interests of the reading public, and “the cultural
development of the masses,” and only served the private interests of authors and heirs.
“If copyright had an economic function, that was only as a right of remuneration for his contribution to
the extension of the socialist art heritage. If copyright had a social role, this was not to protect the author
from the economically stronger exploiter, but was one of the instruments to get the author involved in
the great communist educational project.” (Elst, 2005, p 658)
The Soviet copyright system, even in its post-revolutionary phase, maintained two persistent features
that served as important instruments of knowledge dissemination. First, the statutorily granted
“freedom of translation” meant that translation was treated as an exception to copyright, which did not
require rights holder authorization. This measure dismantled a significant barrier to access in a
multicultural and multilingual empire. By the same token, the denial of protection to foreign authors and
rights holders eased the imports of foreign texts (after, of course the appropriate censorship review).
Due to these instruments:
“[s]oon after its founding, the Soviet Union became as well the world's leading literary pirate, not only
publishing in translation the creations of its own citizens but also publishing large numbers of copies of
the works of Western authors both in translation and in the original language.” (Newcity, 1980, p 6.)
Looking simply at the aggregate numbers of published books, the USSR had an impressive publishing
industry on a scale appropriate to a reading nation. Between 1946 and 1970 more than 1 billion copies of
over 26 thousand different works were published, all by foreign authors (Newcity, 1978). In 1976 alone,
more than 1.7 billion copies of 84,304 books were printed. (Friedberg, Watanabe, & Nakamoto, 1984, fn
4.)
Of course these impressive numbers reflected neither a healthy public sphere, nor a well-functioning
print ecology. The book-based public sphere was both heavily censored and plagued by the peculiar
economic conditions of the Soviet, and later the post-Soviet era.

Censorship
The totalitarian Soviet state had many instruments to control the circulation of literary and scientific
works.1 Some texts never entered official circulation in the first place: “A particularly harsh
prepublication censorship [affected] foreign literature, primarily in the humanities and socioeconomic
disciplines. Books on politics, international relations, sociology, philosophy, cybernetics, semiotics,
linguistics, and so on were hardly ever published.” (Stelmakh, 2001, p 145.)
Many ‘problematic’ texts were only put into severely limited circulation. Books were released in small
print runs; as in-house publications, or they were only circulated among the trustworthy few. As the
resolution of the Central Committee of the Communist Party of June 4, 1959, stated: “Writings by
bourgeois authors in the fields of philosophy, history, economics, diplomacy, and law […] are to be
published in limited quantities after the excision from them of passages of no scholarly or practical
interest. They are to be supplied with extensive introductions and detailed annotations." (quoted in
Friedberg et al., 1984)

1 We share Helen Freshwater's (2003) approach that censorship is a more complex phenomenon than the state just
blocking the circulation of certain texts. Censorship manifested itself in more than one way, and its dominant
modus operandi, institutions, extent, focus, reach and effectiveness showed extreme variation over time. This short
chapter, however, cannot go into the intricate details of the incredibly rich history of censorship in the Soviet Union.
Instead, through much simplification, we try to demonstrate that censorship did not only affect literary works, but
extended deep into scholarly publishing, including the natural science disciplines.

Truncation and mutilation of texts were also frequent. Literary works and texts from the humanities and
social sciences were obvious subjects of censorship, but the natural sciences and technical fields did not
escape:
“In our film studios we received an American technical journal, something like Cinema, Radio and
Television. I saw it on the chief engineer's desk and noticed that it had been reprinted in Moscow.
Everything undesirable, including advertisements, had been removed, and only those technical articles
with which the engineer could be trusted were retained. Everything else, even whole pages, was missing.
This was done by a photo copying process, but the finished product appeared to be printed.” (Dewhirst &
Farrell, 1973, p. 127)
Mass cultural genres were also subject to censorship and control. Women's fiction, melodrama, comics,
detective stories, and science fiction were completely missing or heavily underrepresented in the mass
market. Instead, “a small group of officially approved authors […] were published in massive editions
every year, [and] blocked readers' access to other literature. […]Soviet literature did not fit the formula
of mass culture and was simply bad literature, but it was issued in huge print-runs.” (Stelmakh, 2001, p.
150)
Libraries were also important instruments of censorship. When not destroyed altogether, censored
works ended up in the spetskhrans, limited access special collections established in libraries to contain
censored works. Besides obvious candidates such as anti-Soviet works and western ‘bourgeois’
publications, many scientific works from the fields of biology, nuclear physics, psychology, sociology,
cybernetics, and genetics ended up in these closed collections (Ryzhak, 2005). Access to the spetskhrans
was limited to those with special permits issued by their employers. “Only university educated readers
were enrolled and only those holding positions of at least junior scientific workers were allowed to read
the publications kept by the spetskhran” (Ryzhak, 2005). In the last years of the USSR, the spetskhran of
the Russian State Library—the largest of them with more than 1 million items in the collection—had 43
seats for its roughly 4500 authorized readers. Yearly circulation was around 200,000 items, a figure that
included “the history and literature of other countries, international relations, science of law, technical
sciences and others.” (Ryzhak, 2005)
Librarians thus played a central role in the censorship machinery. They did more than guard the contents
of limited-access collections and purge the freely accessible stocks according to the latest Party
directives. As the intermediaries between the readers and the closed stacks, their task was to carefully
guide readers’ interests:
“In the 1970s, among the staff members of the service department of the Lenin State Library of the
U.S.S.R., there were specially appointed persons-"politcontrollers"-who, apart from their regular
professional functions, had to perform additional control over the literature lent from the general stocks
(not from the restricted access collections), thus exercising censorship over the percolation of avant-garde
aesthetics to the reader, the aesthetics that introduced new ways of thinking and a new outlook on life
and social behavior.” (Stelmakh, 2001)
Librarians also used library cards and lending histories to collect and report information on readers and
suspicious reading habits.
Soviet economic dysfunction also severely limited access to printed works. Acute and chronic shortages
of even censor-approved texts were common, both on the market and in libraries. When the USSR
joined its first international copyright treaty in 1973 (the UNESCO-backed Universal
Copyright Convention), which granted protection to foreign authors and denied “freedom of
translation,” the access problems only got worse. Soviet concern that granting protection to foreign
authors would result in significant royalty payments to western rightsholders proved valid. By 1976, the
yearly USSR trade deficit in publishing reached a million rubles (~5.5 million current USD) (Levin, 1983, p.
157). This imbalance not only limited the number of publications imported into the cash-poor
country, but also raised the price of translated works to double that of Russian-authored books
(Levin, 1983, p. 158).

The literary and scientific underground in Soviet times
Various practices and informal institutions evolved to address the problems of access. Book black
markets flourished: “In the 1970s and 1980s the black market was an active part of society. Buying books
directly from other people was how 35 percent of Soviet adults acquired books for their own homes, and
68 percent of families living in major cities bought books only on the black market.” (Stelmakh, 2001, p
146). Book copying and hoarding was practiced to supplement the shortages:
“People hoarded books: complete works of Pushkin, Tolstoy or Chekhov. You could not buy such things.
So you had the idea that it is very important to hoard books. High-quality literary fiction, high quality
science textbooks and monographs, even biographies of famous people (writers, scientists, composers,
etc.) were difficult to buy. You could not, as far as I remember, just go to a bookstore and buy complete
works of Chekhov. It was published once and sold out and that's it. Dostoyevsky used to be prohibited in
the USSR, so that was even rarer. Lots of writers were prohibited, like Nabokov. Eventually Dostoyevsky
was printed in the USSR, but in very small numbers.
And also there were scientists who wanted scientific books and also could not get them. Mathematics
books, physics - only very few books were published every year, you can't compare this with the market in
the U.S. Russian translations of classical monographs in mathematics were difficult to find.
So, in the USSR, everyone who had a good education shared the idea that hoarding books is very, very
important, and did just that. If someone had free access to a Xerox machine, they were Xeroxing
everything in sight. A friend of mine had entire room full of Xeroxed books.”2
From the 1960s onwards, the ever-growing Samizdat networks tried to counterbalance the effects of
censorship and provide access to both censored classics and information on the current state of Soviet

2 Anonymous source #1

Draft Manuscript, 11/4/2014, DO NOT CITE!
society. Reaching a readership of around 200,000, these networks operated in a decentralized, bottom-up manner. Each node in the chain of distribution copied the texts it received, and distributed the copies.
The nodes also carried information backwards, towards the authors of the samizdat publications.
In the immediate post-Soviet political turmoil and economic calamity, access to print culture did not get
any easier. Censorship officially ended, but so too did much of the funding for the state-funded
publishing sector. Mass unemployment, falling wages, and the resulting loss of discretionary income did
not facilitate the shift toward market-based publishing models. The funding of libraries also dwindled,
limiting new acquisitions (Elst, 2005, p. 299-300). Economic constraints took the place of political ones.
But in the absence of political repression, self-organizing efforts to address these constraints acquired a greater scope of action. Slowly, the informal sphere began to deliver alternative modes of access to
otherwise hard-to-get literary and scientific works.
Russian pirate libraries emerged from these enmeshed contexts: communist ideologies of the reading
nation and mass education; the censorship of texts; the abused library system; economic hardships and
dysfunctional markets, and, most importantly, the informal practices that ensured the survival of
scholarship and literary traditions under hostile political and economic conditions. The prominent place
of Russian pirate libraries in the larger informal media economy—and of Russian piracy of music, film,
and other copyrighted work more generally—cannot be understood outside this history.

The emergence of DIY digital libraries in RuNet
The copying of censored and uncensored works (by hand, by typewriters, by photocopying or by
computers), the hoarding of copied texts, the buying and selling of books on the black market, and the
informal, peer-to-peer distribution of samizdat material were integral parts of the everyday experience of many educated Soviet and post-Soviet readers. The building and maintenance of individual
collections and the participation in the informal networks of exchange offered a sense of political,
economic and cultural agency—especially as the public institutions that supported the core professions
of the intelligentsia fell into sustained economic crisis.
Digital technologies were embraced by these practices as soon as they appeared:
"From late 1970s, when first computers became used in the USSR and printers became available,
people started to print forbidden books, or just books that were difficult to find, not necessarily
forbidden. I have seen myself a print-out on a mainframe computer of a science fiction novel,
printed in all caps! Samizdat was printed on typewriters, xeroxed, printed abroad and xeroxed, or
printed on computers. Only paper circulated, files could not circulate until people started to have
PCs at home. As late as 1992 most people did not have a PC at home. So the only reason to type
a big text into a computer was to print it on paper many times.”3
People who worked in academic and research institutions were well positioned in this process: they had
access to computers, and many had access to the materials locked up in the spetskhrans. Many also had
3 Anonymous source #1

the time and professional motivations to collect and share otherwise inaccessible texts. The core of
current digital collections was created in this late-Soviet/early post-Soviet period by such professionals.
Their home academic and scientific institutions continued to play an important role in the development
of digital text collections well into the era of home computing and the internet.
Digitized texts first circulated in printouts and later on optical/magnetic storage media. With the
emergence of digital networking these texts quickly found their way to the early Internet as well. The
first platform for digital text sharing was the Russian Fidonet, a network of BBS systems similar to
Usenet, which enabled the mass distribution of plain text files. The BBS boards, such as the Holy Spirit
BBS’ “SU.SF & F.FANDOM” group whose main focus was Soviet-Russian science fiction and fantasy
literature, connected fans around emerging collections of shared texts. As an anonymous interviewee described his experience in the early 1990s:
“Fidonet collected a large number of plaintext files in literature / fiction, mostly in Russian, of course.
Fidonet was almost all typed in by hand. […] Maybe several thousand of the most important books,
novels that "everyone must read" and such stuff. People typed in poetry, smaller prose pieces. I have
myself read a sci-fi novel printed on a mainframe, which was obviously typed in. This novel was by
Strugatski brothers. It was not prohibited or dissident, but just impossible to buy in the stores. These
were culturally important, cult novels, so people typed them in. […] At this point it became clear that
there was a lot of value in having a plaintext file with some novels, and the most popular novels were first
digitized in this way.”
The next stage in text digitization started around 1994. By that time growing numbers of people had computers, scanners, and OCR software. Russian internet and PC penetration, while extremely low overall in the 1990s (with 0.1% of the population having internet access in 1994, growing to 8.3% by 2003), began to make inroads in educational and scientific institutions and among the Moscow and St. Petersburg elites, who were often the critical players in these networks. As access to these technologies increased, a much wider array of people began to digitize their favorite texts, and these collections began to circulate, first via CD-ROMs, later via the internet.
One such collection belonged to Maxim Moshkov, who published his library under the name lib.ru in
1994. Moshkov was a graduate of the Moscow State University Department of Mechanics and
Mathematics, which played a large role in the digitization of scientific works. After graduation, he started
to work for the Scientific Research Institute of System Development, a computer science institute
associated with the Russian Academy of Sciences. He describes the early days of his collection as follows:
“I began to collect electronic texts in 1990, on a desktop computer. When I got on the Internet in 1994, I
found lots of sites with texts. It was like a dream came true: there they were, all the desired books. But
these collections were in a dreadful state! Incompatible formats, different encodings, missing content. I
had to spend hours scouring the different sites and directories to find something.
As a result, I decided to convert all the different file-formats into a single one, index the titles of the books
and put them in thematic directories. I organized the files on my work computer. I was the main user of
my collection. I perfected its structure, made a simple, fast and convenient search interface and

developed many other useful functions and put it all on the Internet. Soon, people got into the habit of
visiting the site. […]
For about 2 years I have scoured the internet: I sought out and pulled texts from the network, which were
lying there freely accessible. Slowly the library grew, and the audience increased with it. People started
to send books to me, because they were easier to read in my collection. And the time came when I
stopped surfing the internet for books: regular readers are now sending me the books. Day after day I get
about 100 emails, and 10-30 of them contain books. So many books were sent in, that I did not have time
to process them. Authors, translators and publishers also started to send texts. They all needed the
library.”(Мошков, 1999)

In the second half of the 1990s, the Russian Internet—RuNet—was awash in book digitization projects.
With the advent of scanners, OCR technology, and the Internet, the work of digitization eased
considerably. Texts migrated from print to digital and sometimes back to print again. They circulated
through different collections, which, in turn, merged, fell apart, and re-formed. Digital libraries with the
mission to collect and consolidate these free-floating texts sprang up by the dozen.
Such digital librarianship was the antithesis of official Soviet book culture: it was free, bottom-up,
democratic, and uncensored. It also offered a partial remedy to problems created by the post-Soviet
collapse of the economy: the impoverishment of libraries, readers, and publishers. In this context, book
digitization and collecting also offered a sense of political, economic and cultural agency, with parallels
to the copying and distribution of texts in Soviet times. The capacity to scale up these practices coincided
with the moment when anti-totalitarian social sentiments were the strongest, and economic needs the
direst.
The unprecedented bloom of digital librarianship was the result of the superimposition of multiple waves of distinct transformations: technological, political, economic and social. “Maksim Moshkov's Library”
was ground zero for this convergence and soon became a central point of exchange for the community
engaged in text digitization and collection:
“[At the outset] there were just a couple of people who started scanning books in large quantities. Literally hundreds of books. Others started proofreading, etc. There was a huge hole in the market for books. Science fiction, adventure, crime fiction, all of this was hugely in demand by the public. So lib.ru was to a large part the response, and was filled by those books that people most desired and most valued.”
For years, lib.ru integrated as much as it could of the different digital libraries flourishing in the RuNet. By
doing so, it preserved the collections of the many short-lived libraries.
This process of collection slowed in the early 2000s. By that time, lib.ru had all of the classics, resulting in a decrease in the flow of new digitized material. By the same token, the Russian book market was finally starting to offer works aimed at the popular mainstream, and was flooded by cheap romances, astrology, crime fiction, and other genres. Such texts started to appear in, and would soon flood, lib.ru.
Many contributors, including Moshkov, were concerned that such ephemera would dilute the original
library. And so they began to disaggregate the collection. Self-published literature, “user generated content,” and fan fiction were separated into the aptly named samizdat.lib.ru, which housed original texts submitted by readers. Popular fiction (“low-brow literature”) was copied from the relevant subsections
of lib.ru and split off. Sites specializing in those genres quickly formed their own ecosystem. [L], the first
of its kind, now charges a monthly fee to provide access to the collection. The [f] community split off
from [L] the same way that [L] split off from lib.ru, to provide free and unrestricted access to a
fundamentally similar collection. Finally, some in the community felt the need to focus their efforts on a
separate collection of scientific works. This became the Kolhoz collection.

The genesis of a million-book scientific library
A Kolhoz (Russian: колхо́з) was one of the types of collective farm that emerged in the early Soviet
period. In the early days, it was a self-governing, community-owned collaborative enterprise, with many
of the features of a commons. For the Russian digital librarians, these historical resonances were
intentional.
“The kolhoz group was initially a community that scanned and processed scientific materials: books and,
occasionally, articles. The ethos was free sharing. Academic institutes in Russia were in dire need of
scientific texts; they xeroxed and scanned whatever they could. Usually, the files were then stored on the
institute's ftp site and could be downloaded freely. There were at least three major research institutes
that did this, back in early 2000s, unconnected to each other in any way, located in various faraway parts
of Russia. Most of these scans were appropriated by the kolhoz group and processed into DJVU4.
The sources of files for kolhoz were, initially, several collections from academic institutes (downloaded
whenever the ftp servers were open for anonymous access; in one case, from one of the institutes of the
Chinese academy of sciences, but mostly from Russian academic institutes). At that time (around 2002),
there were also several commercialized collections of scanned books on sale in Russia (mostly, these were
college-level textbooks on math and physics); these files were also all copied to kolhoz and processed into
DJVU. The focus was on collecting the most important science textbooks and monographs of all time, in
all fields of natural science.
There was never any commercial support. The kolhoz group never had a web site with a database, like
most projects today. They had an ftp server with files, and the access to ftp was given by PM in a forum.
This ftp server was privately supported by one of the members (who was an academic researcher, like
most kolhoz members). The files were distributed directly by burning files on writable DVDs and giving the

4 DJVU is a file format that revolutionized online book distribution the way mp3 revolutionized online music distribution. For books that contain graphs, images and mathematical formulae, scanning is the only digitization option. However, the large number of resulting image files is difficult to handle. The DJVU file format allows the images of scanned book pages to be stored in the smallest possible file size, which makes it the perfect medium for the distribution of scanned e-books.

DVDs away. Later, the ftp access was closed to the public, and only a temporary file-swapping ftp server
remained. Today the kolhoz DVD releases are mostly spread via torrents.”5
Kolhoz amassed around fifty thousand documents; the mexmat collection of the Moscow State University Department of Mechanics and Mathematics (Moshkov’s alma mater) was around the same size; the “world of books” collection (mirknig) had around thirty thousand files; and there were around a dozen other smaller archives, each with approximately ten thousand files in their respective collections.
The Kolhoz group dominated the science-minded ebook community in Russia well into the late 2000’s.
Kolhoz, however, suffered from the same problems as the early Fidonet-based text collections. Since it
was distributed on DVDs, via ftp servers and on torrents, it was hard to search, it lacked a proper catalog
and it was prone to fragmentation. Parallel solutions soon emerged: around 2006-7, an existing book site
called Gigapedia copied the English books from Kolhoz, set up a catalog, and soon became the most
influential pirate library in the English speaking internet.
Similar cataloguing efforts soon emerged elsewhere. In 2007, someone on rutracker.ru, a Russian BBS
focusing on file sharing, posted torrent links to 91 DVDs containing science and technology titles
aggregated from various other Russian sources, including Kolhoz. This massive collection had no
categorization or particular order. But it soon attracted an archivist: a user of the forum started the
laborious task of organizing the texts into a usable, searchable format—first filtering duplicates and organizing the existing metadata into an Excel spreadsheet, and later moving to a more open, web-based database operating under the name Aleph.
Aleph inherited more than just books from Kolhoz and Moshkov’s lib.ru. It inherited their elitism with
regard to canonical texts, and their understanding of librarianship as a community effort. Like the earlier
sites, Aleph’s collections are complemented by a stream of user submissions. As with the other sites, the number of submissions grew rapidly as the site’s visibility, reputation and trustworthiness were established, and, as with the others, it later fell as more and more of what was perceived as canonical literature was uploaded:
“The number of mankind’s useful books is about what we already have. So growth is defined by newly
scanned or issued books. Also, the quality of the collection is represented not by the number of books but
by the amount of knowledge it contains. [ALEPH] does not need to grow more and I am not the only one
among us who thinks so. […]
We have absolutely no idea who sends books in. It is practically impossible to know, because there are a
million books. We gather huge collections which eliminate any traces of the original uploaders.
My expectation is that new arrivals will dry up. Not completely, as I described above, some books will
always be scanned or rescanned (it nowadays happens quite surprisingly often) and the overall process of
digitization cannot and should not be stopped. It is also hard to say when the slowdown will occur: I
expected it about a year ago, but then library.nu got shut down and things changed dramatically in many
respects. Now we are "in charge" (we had been the largest anyways, just now everyone thinks we are in
5 Anonymous source #1
charge) and there has been a temporary rise in the book inflow. At the moment, relatively small or
previously unseen collections are being integrated into [ALEPH]. Perhaps in a year it will saturate.
However, intuition is not a good guide. There are dynamic processes responsible for eBook availability. If
publishers massively digitize old books, they'll obviously be harvested and that will change the whole
picture.” 6
Aleph’s ambitions to create a universal library are limited, at least in terms of scope. It does not want to have everything, or just anything. What it wants is what the community thinks is relevant, measured by the act of actively digitizing and sharing books. But it has created a very interesting strategy
to establish a library which is universal in terms of its reach. The administrators of Aleph understand that
Gigapedia’s downfall was due to its visibility and they wish to avoid that trap:
“Well, our policy, which I control as strictly as I can, is to avoid fame. Gigapedia's policy was to gain as
much fame as possible. Books should be available to you, if you need them. But let the rest of the world
stay in its equilibrium. We are taking great care to hide ourselves and it pays off.”7
They have solved the dilemma of providing access without jeopardizing their mission by open sourcing
the collection and thus allowing others to create widely publicized services that interface with the
public. They let others run the risk of getting famous.

Mirrors and communities
Aleph serves as a source archive for around a half-dozen freely accessible pirate libraries on the net. The
catalog database is downloadable, the content is downloadable, even the server code is downloadable.
No passwords are required to download and there are no gatekeepers. There are no obstacles to setting up a similar library with a wider catalog, an improved user interface and better services, with a different audience or, in fact, a different business model.
This arrangement creates a two-layered community. The core group of Aleph admins maintains the current service, while a loose and ever-changing network of ‘mirror sites’ builds on the Aleph infrastructure.
“The unspoken agreement is that the mirrors support our ideas. Otherwise we simply do not interact with
them. If the mirrors do support this, they appear in the discussions, on the Web etc. in a positive context.
This is again about building a reputation: if they are reliable, we help with what we can, otherwise they
should prove the World they are good on their own. We do not request anything from them. They are free
to do anything they like. But if they do what we do not agree with, it'll be taken into account in future
relations. If you think for a while, there is no other democratic way of regulation: everyone expresses his
own views and if they conform with ours, we support them. If the ideology does not match, it breaks
down.”8

6 Anonymous source #1
7 Anonymous source #2
8 Anonymous source #1
The core Aleph team claims to exclusively control only two critical resources: the BBS that is the home of
the community, and the book-uploading interface. That claim is, however, not entirely accurate. For the
time being, the academic-minded e-book community indeed gathers on the BBS managed by Aleph, and though there is little incentive to move on, technically nothing stands in the way of alternatives springing up. As for the centralization of the book collection: many of the mirrors have their own upload pages
where one can contribute to a mirror’s collection, and it is not clear how or whether books that land at
one of the mirrors find their way back to the central database. Aleph also offers a desktop library
management tool, which enables dedicated librarians to see the latest Aleph database on their desktop
and integrate their local collections with the central database via this application. Nevertheless, it seems
that nothing really stands in the way of the fragmentation of the collection, apart from the willingness of
uploaders to contribute directly to Aleph rather than to one of its mirrors (or other sites).
Funding for Aleph comes from the administrators’ personal resources as well as occasional donations
when there is a need to buy or rent equipment or services:
“[W]e've been asking and getting support for this purpose for years. […] All our mirrors are supported
primarily from private pockets and inefficient donation schemes: they bring nothing unless a whole
campaign is arranged. I asked the community for donations 3 or 4 times, for a specific purpose only and
with all the budget spoken for. And after getting the requested amount of money we shut down the
donations.”9
Mirrors, however, do not need to be non-commercial to enjoy the support of the core Aleph community; they just have to provide free access. Ad-supported business models that do not charge for individual
access are still acceptable to the community, but there has been serious fallout with another site, which
used the Aleph stock to seed its own library, but decided to follow a “collaborative piracy” business
approach.
“To make it utmost clear: we collaborate with anyone who shares the ideology of free knowledge
distribution. No conditions. [But] we can't suddenly start supporting projects that earn money. […]
Moreover, we've been tricked by commercial projects in the past when they used the support of our
community for their own benefit.”10
The site in question, [e], is based on a simple idea: If a user cannot find a book in its collection, the
administrators offer to purchase a digital or print copy, rip it, and sell it to the user for a fraction of the
original price—typically under $1. Payments are to be made in Amazon gift cards, which make the purchases easy but the de-anonymization of users difficult. [e] recoups its investment, in principle,
through resale. While clearly illegal, the logic is not that different from that of private subscription
libraries, which purchase a resource and distribute the costs and benefits among club members.

9 BBS comment posted on Jan 15, 2013
10 BBS comment posted on Jan 15, 2013
Although from the rights holders’ perspective there is little difference between the two approaches,
many participants in the free access community draw a sharp line between the two, viewing the sales
model as a violation of community norms.
“[e] is a scam. They were banned in our forum. Yes, most of the books in [e] came from [ALEPH], because
[ALEPH] is open, but we have nothing to do with them... If you wish to buy a book, do it from legal
sources. Otherwise it must be free.[…]
What [e] wants:
- make money on ebook downloads, no matter what kind of ebooks.
- get books from all the easy sources - spend as little effort as possible on books - maximize profit.
- no need to build a community, no need to improve quality, no need to correct any errors - just put all
files in a big pile - maximize profit.
- files are kept in secret, never given away, there is no listing of files, there is no information about what
books are really there or what is being done.
There are very few similarities between [e] and [ALEPH], and these similarities are too
superficial to serve as a common ground for communication. […]
They run an illegal business, making a profit.”11
Aleph administrators describe a set of values that differentiates possible site models. They prioritize the curatorial mission and the provision of long-term free access to the collection, with all the costs such a position implies: open sourcing the collection, ignoring takedown requests, keeping a low profile, refraining from commercial activities and, as a result, operating on a reduced budget. [e] prioritizes the on-demand expansion of its catalogue, but that implies a commercial operation, a larger budget and the associated high legal risk. Sites carrying Aleph’s catalogue prioritize public visibility and carry ads to cover costs, but respond to takedown requests to avoid as much trouble as they can. From the perspective of expanding access, these are not easy or straightforward tradeoffs. In Aleph’s case, the strong commitment to the mission of providing free access comes with significant sacrifices, the most important of which is relinquishing control over its most valuable asset: its collection of 1.2 million scientific books. But they believe that these costs are justified by the promise that, this way, the fate of free access is not tied to the fate of Aleph.
The fact that piratical file sharing communities are willing to make substantial sacrifices (in terms of self-restraint) to ensure their long-term survival has been documented in a number of different cases (Bodó, 2013). Aleph is unique, however, in its radical open source approach. No other piratical community has given up control over itself so entirely. This approach is rooted in the way it regards the legal status of its subject matter: scholarly publications. While norms of openness in the field of scientific knowledge production were first formed in the Enlightenment period, Aleph’s
11 BBS comments posted on Jul 02, 2013, and Aug 25, 2013
copynorms are as much shaped by the specificities of the post-Soviet era as by the age-old realization that in science we can see further only by “standing on the shoulders of giants”.

Copyright and copynorms around Russian pirate libraries
The struggle to re-establish rightsholders’ control over digitized copyrighted works has defined the
copyright policy arena since Napster emerged in 1999. Russia brought a unique history to this conflict. In Russia, digital libraries emerged in a period of double transformation: the post-Soviet copyright system had to adopt global norms, while the global norms struggled to adapt to the emergence of digital copying.
The first post-Soviet decade produced new copyright laws that conformed with some of the international
norms advocated by Western rightsholders, but little legal clarity or enforceability (Sezneva & Karaganis,
2011). Under such conditions, informally negotiated copynorms stepped in to fill the void left by non-existent, unreasonable, or unenforceable laws. The pirate libraries in the RuNet are as much regulated by such
norms as by the actual laws themselves.
During most of the 1990s, user-driven digitization and archiving was legal or, to be more exact, wasn’t illegal. The first Russian copyright law, enacted in 1993, did not cover “internet rights” until a 2006
amendment (Budylin & Osipova, 2007; Elst, 2005, p. 425). As a result, many (including the Moscow prosecutor’s office) argued that the distribution of copyrighted works via the internet was not copyright infringement. Authors and publishers, who saw their works appear in digital form and circulate via CD-ROMs and the internet, had to rely on informal norms, still in development, to establish control over their texts vis-à-vis enthusiastic collectors and for-profit entrepreneurs.
The HARRYFAN CD was one of the early examples of a digital text collection in circulation before internet
access was widespread. The CD contained around ten thousand texts, mostly Russian science fiction. It
was compiled in 1997 by Igor Zagumenov, a book enthusiast, from the texts that circulated on the Holy
Spirit BBS. The CD was a non-profit project, planned to be printed and sold in a run of around 1,000 copies.
Zagumenov did get in touch with some of the authors and publishers, and got permission to release
some of their texts, but the CD also included many other works that were uploaded to the BBS without
authorization. The CD included the following copyright notice, alongside the name and contact of
Zagumenov and those who granted permission:
Texts on this CD are distributed in electronic format with the consent of the copyright holders or their
literary agent. The disk is aimed at authors, editors, translators and fans of SF & F as a compact reference
and information library. Copying or reproduction of this disc is not allowed. For the commercial use of
texts please refer directly to the copyright owners at the following addresses.
The authors whose texts and unpublished manuscripts appeared in the collection without authorization
started to complain to those whose contact details were in the copyright notice. Some complained about the material damage the collection may have caused them, but most complaints focused on
moral rights: unauthorized publication of a manuscript, the mutilation of published works, lack of
attribution, or the removal of original copyright and contact notices. Some authors had no problem
appearing in non-commercially distributed collections but objected to the fact that the CDs were sold
(and later overproduced in spite of Zagumenov’s intentions).
The debate, which took place in the book-related fora of Fidonet, raised some important points. Participants again drew a significant distinction between the free access provided first by Fidonet (and later by lib.ru, which integrated some parts of the collection) and what was perceived as Zagumenov’s for-profit enterprise—despite the fact that the price of the CD only covered printing costs. The debate also drew authors’ and publishers’ attention to the digital book communities’ actions, which many saw as beneficial as long as they respected the wishes of the authors. Some authors did not want to appear online at all; others wanted only their published works to be circulated.
Lib.ru, of course, integrated parts of the HARRYFAN CD into its collection. Moshkov’s policy towards authors’ rights was to ask for permission if he could contact the author or publisher. He also honored takedown requests sent to him. In 1999 he wrote on copyright issues as follows:
“The author’s interests must be protected on the Internet: the opportunity to find the original copy, the
right of attribution, protection from distorting the work. Anyone who wants to protect his/her rights,
should be ready to address these problems, ranging from the ability to identify the offending party, to the
possibility of proving infringement.[…]
Meanwhile, it has become a pressing question how to protect author-netizens’ rights regarding their
work published on the Internet. It is known that there are a number of periodicals that reprint material
from the Internet without the permission of the author, without payment of a fee, without prior
arrangement. Such offenders need to be shamed via public outreach. The "Wall of shame" website is one
of the positive examples of effective instruments established by the networked public to protect their
rights. It manages to do the job without bringing legal action - polite warnings, an indication of potential
trouble and shaming of the infringer.
Do we need any laws for digital libraries? Probably we do, but until then we have to do without. Yes, of
course, it would be nice to have their status established as “cultural objects” and have the same rights as
a "real library" to collect information, but that might be in the distant future. It would also be nice to
have the e-library "legal deposits" of publications in electronic form, but when even Leninka [the Russian
State Library] cannot always afford that, what we really need are enthusiastic networkers. […]
The policy of the library is to take everything they give, otherwise they cease to send books. It is also to
listen to the authors and strictly comply with their requirements. And it is to grow and prosper. […] I
simply want the books to find their readers because I am afraid to live in a world where no one reads
books. This is already the case in America, and it is speeding up with us. I don’t just want to derail this
process, I would like to turn it around.

Moshkov played a crucial role in consolidating copynorms in the Russian digital publishing domain. His
reputation and place in the Russian literary domain are marked by a number of prizes12 and by the
library’s continued existence. This place was secured by a number of closely intertwined factors:
- Framing and anchoring the digitization and distribution practice in the library tradition.
- The non-profit status of the enterprise.
- Respecting the wishes of the rights holders even if he was not legally obliged to do so.
- Maintaining active communication with the different stakeholders in the community, including authors and readers.
- Responding to a clear gap in affordable, legal access.
- Conservatism with regard to the book, anchored in the argument that digital texts are not substitutes for printed matter.

Many other digital libraries tried to follow Moshkov’s formula, but the times were changing. Internet and
computer access left their subcultural niches and became mainstream; commercialization became a
viable option and thus an issue for both the community and rightsholders; and the legal environment
was about to change.

Formalization of the IP regime in the 2000s
As soon as the 1993 copyright law passed, the US resumed pressure on the Russian government for
further reform. Throughout the period—and indeed to the present day—US Trade Representative
Special 301 reports cited inadequate protections and lack of enforcement of copyright. Russia’s plans to
join the WTO, over which the US had effective veto power, also became leverage to bring the Russian
copyright regime into compliance with US norms.
Book piracy was regularly mentioned in Special 301 reports in the 2000s, but the details, alleged losses,
and analysis changed little from year to year. The estimated losses of $40 million per year throughout this
period were dwarfed by claims from the studios and software vendors, and clearly were not among the
top priorities of the USTR. For most of the decade, the electronic availability of bestsellers and academic
textbooks was seen in the context of print substitution, rather than damage to the non-existent
electronic market. And though there is little direct indication, the Special 301 reports name sites which
(unlike lib.ru) were serving audiences beyond the RuNet, indicating that the focus of enforcement was
not to protect US interests in the Russian market, but to prevent sites based in Russia from catering to
demand in the high-value Western European and US markets.
A 1998 amendment to the 1993 copyright law extended the legal framework to encompass digital rights,
though in a fashion that continued to produce controversy. After 1998, digital services had to license
content from collecting societies, but those societies needed no permission from rightsholders provided
they paid royalties. The result was a proliferation of collective management organizations, competing to
license the material to digital services (Sezneva and Karaganis, 2011), which under this arrangement
12 ROTOR, the International Union of Internet Professionals in Russia, voted lib.ru the “literary site of the year” in
1999, 2001, and 2003, “electronic library of the year” in 2004, 2006, 2008, 2009, and 2010, “programmer of the
year” in 1999, and “man of the year” in 2004 and 2005.

were compliant with Russian law, but were regarded as illegal by Western rights holders who claimed
that the Russian collecting societies were not representing them.
The best-known dispute from this time was the one around the legality of AllofMP3.com, a site that
sold music from western record labels at prices far below those of iTunes or other officially licensed
vendors. AllofMP3.com claimed that it was licensed by ROMS, the Russian Society for Multimedia and
Internet (Российское общество по мультимедиа и цифровым сетям (НП РОМС)), but despite that it
became the focal point of US (and behind them, major label) pressure, leading to an unsuccessful
criminal prosecution of the site owner and the eventual closure of the site in 2007. Although Lib.ru had
some direct agreements with authors, it also licensed much of its collection from ROMS, and thus was in
the same legal situation as AllofMP3.com.
Lib.ru avoided the attention of foreign rightsholders and Russian state pressure, and even benefited from
state support during the period, receiving a $30,000 grant from the Federal Agency for Press and
Mass Communications to digitize the most important works from the 1930s. But the chaotic licensing
environment that governed their legal status also came back to haunt them. In 2005, a lawsuit was
brought against Moshkov by KM Online (KMO), an online vendor that sold digital texts for a small fee.
Although the KMO collection—like every other collection—had been assembled from a wide range of
sources on the Internet, KMO claimed to pay a 20% royalty on its income to authors. In 2004 KMO
requested that lib.ru take down works by several authors with whom (or with whose heirs) KMO claimed
to be in exclusive contract to distribute their texts online. KMO’s claims turned out to be only partly true.
KMO had arranged contracts with a number of the heirs to classics of the Soviet period, who hoped to
benefit from an obscure provision in the 1993 Russian copyright law that granted copyrights to the heirs
of politically persecuted and later rehabilitated Soviet-era authors. Moshkov, in turn, claimed that he
had written or oral agreements with many of the same authors and heirs, in addition to his agreement
with ROMS.
The lawsuit was a true public event. It generated thousands of news items both online and in the
mainstream press. Authors, members of the publishing industry, legal professionals, librarians, internet
professionals publicly supported Moshkov, while KMO was seen as a rogue operator that would lie to
make easy money on freely-available digital resources.
Eventually, the court ruled that KMO indeed had one exclusive contract with Eduard Gevorgyan, and that
the publication of his texts by Moshkov infringed the moral (but not the economic) rights of the author.
Moshkov was ordered to pay 3000 Rubles (approximately $100) in compensation.
The lawsuit was a sign of a slow but significant transformation in the Russian print ecosystem. The idea
of a viable market for electronic books began to find a foothold. Electronic versions of texts began to be
regarded as potential substitutes for the printed versions, not advertisements for them or supplements
to them. More and more commercial services emerged, which regarded the well-entrenched free digital
libraries as competitors. As Russia continued to bring its laws into closer conformance with WTO
requirements, ahead of Russia’s admission in 2012, western rightsholders gained enough power to
demand enforcement against RuNet pirate sites. The kinds of selective enforcement for political or

business purposes, which had marked the Russian IP regime throughout the decade (Sezneva &
Karaganis, 2011), slowly gave way to more uniform enforcement.

Closure of the Legal Regime
The legal, economic, and cultural conditions under which Aleph and its mirrors operate today are very
different from those of two decades earlier. The major legal loopholes are now closed, though Russian
authorities have shown little inclination to pursue Aleph so far:
I can't say whether it's the Russian copyright enforcement or the Western one that's most dangerous for
Aleph; I'd say that Russian enforcement is still likely to tolerate most of the things that Western
publishers won't allow. For example, lib.ru and [L] and other unofficial Russian e-libraries are tolerated
even though far from compliant with the law. These kinds of e-libraries could not survive at all in western
countries.13
Western publishers have been slow to join record, film, and software companies in their aggressive
online enforcement campaigns, and academic publishers even more so. But such efforts are slowly
increasing, as the market for digital texts grows and as publishers benefit from the enforcement
precedents set or won by the more aggressive rightsholder groups. The domain name of [os], one of the
sites mirroring the Aleph collection, was seized, apparently due to legal action taken by a US
rightsholder, and the site also started to respond to DMCA notices, removing links to books reported to be
infringing. Aleph responds to this with a number of tactical moves:
We want books to be available, but only for those who need them. We do not want [ALEPH] to be visible.
If one knows where to get books, there are here for him or her. In this way we stay relatively invisible (in
search engines, e.g.), but all the relevant communities in the academy know about us. Actually, if you
question people at universities, the percentage of them is quite low. But what's important is that the
news about [ALEPH] is spread mostly by face-to-face communication, where most of the unnecessary
people do not know about it. (Unnecessary are those who aim profit)14
The policy of invisibility is radically different from Moshkov’s policy of maximum visibility. Aleph hopes
that it can recede into the shadows, where it will be protected by the omertà of academics who share the
sharing ethos:
In Russian academia, [Aleph] is tacitly or actively supported. There are people that do not want to be
included, but it is hard to say who they are in most cases. Since there are DMCA complaints, of course
there are people who do not want stuff to appear here. But in our experience the complainers are only
from the non-scientific fellows. […] I haven't seen a single complaint from the authors who should
constitute our major problem: professors etc. No, they don't complain. Who complains are either of such
type I have mentioned or the ever-hungry publishers.15

13 Anonymous source #1
14 Anonymous source #1
15 Anonymous source #1

The protection the academic community has to offer may not be enough to fend off the publishers’
enforcement actions. Receding further into the darknets and hiding behind the veil of privacy
technologies is one option the Aleph site has: the first mirror on I2P, an anonymizing network designed
to hide the whereabouts and identity of web services, is already operational. But
[i]f people are physically served court invitations, they will have to close the site. The idea is, however,
that the entire collection is copied throughout the world many times over, the database is open, the code
for the site is open, so other people can continue.16

On methodology
We tried to reconstruct the story behind Aleph by conducting interviews and browsing through the BBS
of the community. Access to the site and community members was given under a strict condition of
anonymity. We thus removed any reference to the names and URLs of the services in question.
At one point we shared an early draft of this paper with interested members and asked for their
feedback. Beyond access and feedback, community members helped the writing of this article by
providing translations of some Russian originals, as well as by reviewing the translations made by the
author. In return, we made financial contributions to the community in the value of 100 USD.
We reproduced forum entries without any edits to the language; interviews conducted via IM services,
however, were edited to reflect basic writing standards.

16 Anonymous source #1

References

Abelson, H., Diamond, P. A., Grosso, A., & Pfeiffer, D. W. (2013). Report to the President: MIT and the
Prosecution of Aaron Swartz. Cambridge, MA. Retrieved from http://swartzreport.mit.edu/docs/report-to-the-president.pdf
Alekseeva, L., Pearce, C., & Glad, J. (1985). Soviet dissent: Contemporary movements for national,
religious, and human rights. Wesleyan University Press.
Bodó, B. (2013). Set the fox to watch the geese: voluntary IP regimes in piratical file-sharing
communities. In M. Fredriksson & J. Arvanitakis (Eds.), Piracy: Leakages from Modernity.
Sacramento, CA: Litwin Books.
Borges, J. L. (1998). The library of Babel. In Collected fictions. New York: Penguin.
Bowers, S. L. (2006). Privacy and Library Records. The Journal of Academic Librarianship, 32(4), 377–383.
doi:http://dx.doi.org/10.1016/j.acalib.2006.03.005
Budylin, S., & Osipova, Y. (2007). Is AllOfMP3 Legal? Non-Contractual Licensing Under Russian Copyright
Law. Journal Of High Technology Law, 7(1).
Bush, V. (1945). As We May Think. Atlantic Monthly.
Dewhirst, M., & Farrell, R. (Eds.). (1973). The Soviet Censorship. Metuchen, NJ: The Scarecrow Press.
Elst, M. (2005). Copyright, freedom of speech, and cultural policy in the Russian Federation.
Leiden/Boston: Martinus Nijhoff.
Ermolaev, H. (1997). Censorship in Soviet Literature: 1917-1991. Rowman & Littlefield.
Foerstel, H. N. (1991). Surveillance in the stacks: The FBI’s library awareness program. New York:
Greenwood Press.
Friedberg, M., Watanabe, M., & Nakamoto, N. (1984). The Soviet Book Market: Supply and Demand.
Acta Slavica Iaponica, 2, 177–192. Retrieved from
http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/7941/1/KJ00000034083.pdf
Interview with Dusan Barok. (2013). Neural, 10–11.
Interview with Marcell Mars. (2013). Neural, 6–8.
Komaromi, A. (2004). The Material Existence of Soviet Samizdat. Slavic Review, 63(3), 597–618.
doi:10.2307/1520346

Lessig, L. (2013). Aaron’s Laws - Law and Justice in a Digital Age. Cambridge, MA: Harvard Law School.
Retrieved from http://www.youtube.com/watch?v=9HAw1i4gOU4
Levin, M. B. (1983). Soviet International Copyright: Dream or Nightmare. Journal of the Copyright Society
of the U.S.A., 31, 127.
Liang, L. (2012). Shadow Libraries. e-flux. Retrieved from http://www.e-flux.com/journal/shadow-libraries/
Newcity, M. A. (1978). Copyright law in the Soviet Union. Praeger.
Newcity, M. A. (1980). The Universal Copyright Convention as an Instrument of Repression: The Soviet
Experiment. In Copyright L. Symp. (Vol. 24, p. 1). HeinOnline.
Patry, W. F. (2009). Moral panics and the copyright wars. New York: Oxford University Press.
Post, R. (1998). Censorship and Silencing: Practices of Cultural Regulation. Getty Research Institute for
the History of Art and the Humanities.
Rieusset-Lemarié, I. (1997). P. Otlet’s mundaneum and the international perspective in the history of
documentation and information science. Journal of the American Society for Information Science,
48(4), 301–309.
Ryzhak, N. (2005). Censorship in the USSR and the Russian State Library. IFLA/FAIFE Satellite meeting:
Documenting censorship – libraries linking past and present, and preparing for the future.
Sezneva, O., & Karaganis, J. (2011). Chapter 4: Russia. In J. Karaganis (Ed.), Media Piracy in Emerging
Economies. New York: Social Science Research Council.
Skilling, H. G. (1989). Samizdat and an Independent Society in Central and Eastern Europe. Palgrave
Macmillan.
Solzhenitsyn, A. I. (1974). The Gulag Archipelago 1918-1956: An Experiment in Literary Investigation,
Parts I-II. Harper & Row.
Stelmach, V. D. (1993). Reading in Russia: findings of the sociology of reading and librarianship section of
the Russian state library. The International Information & Library Review, 25(4), 273–279.
Stelmakh, V. D. (2001). Reading in the Context of Censorship in the Soviet Union. Libraries & Culture,
36(1), 143–151. doi:10.2307/25548897
Suber, P. (2013). Open Access. Cambridge, MA: The MIT Press.
UHF. (2005). Где-где - на борде! Хакер, 86–90.

Гроер, И. (1926). Авторское право [Copyright]. In Большая Советская Энциклопедия [Great Soviet
Encyclopedia]. Retrieved from http://ru.gse1.wikia.com/wiki/Авторское_право


Mars & Medak
Knowledge Commons and Activist Pedagogies
2017


KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES: FROM IDEALIST POSITIONS TO COLLECTIVE ACTIONS
Conversation with Marcell Mars and Tomislav Medak (co-authored with Ana Kuzmanic)

Marcell Mars is an activist, independent scholar, and artist. His work has been
instrumental in the development of civil society in Croatia and beyond. Marcell is one
of the founders of the Multimedia Institute – mi2 (1999) (Multimedia Institute,
2016a) and Net.culture club MaMa in Zagreb (2000) (Net.culture club MaMa,
2016a). He is a member of Creative Commons Team Croatia (Creative Commons,
2016). He initiated GNU GPL publishing label EGOBOO.bits (2000) (Monoskop,
2016a), meetings of technical enthusiasts Skill sharing (Net.culture club MaMa,
2016b) and various events and gatherings in the fields of hackerism, digital
cultures, and new media art. Marcell regularly talks and runs workshops about
hacking, free software philosophy, digital cultures, social software, the semantic web,
etc. In 2011–2012 Marcell conducted research on Ruling Class Studies at the Jan van
Eyck in Maastricht, and in 2013 he held a fellowship at Akademie Schloss Solitude
in Stuttgart. Currently, he is a PhD researcher at the Digital Cultures Research Lab at
Leuphana Universität Lüneburg.
Tomislav Medak is a cultural worker and theorist interested in political
philosophy, media theory and aesthetics. He is an advocate of free software and
free culture, and the Project Lead of the Creative Commons Croatia (Creative
Commons, 2016). He works as coordinator of theory and publishing activities at
the Multimedia Institute/MaMa (Zagreb, Croatia) (Net.culture club MaMa, 2016a).
Tomislav is an active contributor to the Croatian Right to the City movement
(Pravo na grad, 2016). He has translated numerous books into Croatian,
including Multitude (Hardt & Negri, 2009) and A Hacker Manifesto (Wark,
2006c). He is an author and performer with the internationally acclaimed Zagreb-based
performance collective BADco (BADco, 2016). Tomislav writes and talks
about the politics of technological development, and about politics and aesthetics.
Tomislav and Marcell have been working together for almost two decades.
Their recent collaborations include a number of activities around the Public Library
project, including HAIP festival (Ljubljana, 2012), exhibitions in
Württembergischer Kunstverein (Stuttgart, 2014) and Galerija Nova (Zagreb,
2015), as well as coordinated digitization projects Written-off (2015), Digital
Archive of Praxis and the Korčula Summer School (2016), and Catalogue of
Liberated Books (2013) (in Monoskop, 2016b).

CHAPTER 12

Ana Kuzmanic is an artist based in Zagreb and Associate Professor at the
Faculty of Civil Engineering, Architecture and Geodesy at the University of Split
(Croatia), lecturing in drawing, design and architectural presentation. She is a
member of the Croatian Association of Visual Artists. Since 2007 she has held more
than a dozen individual exhibitions and taken part in numerous collective
exhibitions in Croatia, the UK, Italy, Egypt, the Netherlands, the USA, Lithuania
and Slovenia. In 2011 she co-founded the international artist collective Eastern
Surf, which has “organised, produced and participated in a number of projects
including exhibitions, performance, video, sculpture, publications and web based
work” (Eastern Surf, 2017). Ana's artwork critically deconstructs dominant social
readings of reality. It tests traditional roles of artists and viewers, giving the
observer an active part in creation of artwork, thus creating spaces of dialogue and
alternative learning experiences as platforms for emancipation and social
transformation. Grounded within a postdisciplinary conceptual framework, her
artistic practice is produced via research and expression in diverse media located at
the boundaries between reality and virtuality.
ABOUT THE CONVERSATION

I have known Marcell Mars since our student days, yet our professional paths have
crossed only sporadically. In 2013 I asked Marcell for input about potential
interlocutors for this book, and he connected me to McKenzie Wark. In late 2015,
when we started working on our own conversation, Marcell involved Tomislav
Medak. Marcell’s and Tomislav’s recent works are closely related to the arts, so I
requested Ana Kuzmanic’s input on these matters. Since the beginning of the
conversation, Marcell, Tomislav, Ana, and I occasionally discussed its generalities
in person. Yet, the presented conversation took place in a shared online document
between November 2015 and December 2016.
NET.CULTURE AT THE DAWN OF THE CIVIL SOCIETY

Petar Jandrić & Ana Kuzmanic (PJ & AK): In 1999, you established the
Multimedia Institute – mi2 (Multimedia Institute, 2016a); in 2000, you established
the Net.culture club MaMa (both in Zagreb, Croatia). The Net.culture club MaMa
has the following goals:
To promote innovative cultural practices and broadly understood social
activism. As a cultural center, it promotes wide range of new artistic and
cultural practices related in the first place to the development of
communication technologies, as well as new tendencies in arts and theory:
from new media art, film and music to philosophy and social theory,
publishing and cultural policy issues.
As a community center, MaMa is a Zagreb’s alternative ‘living room’ and
a venue free of charge for various initiatives and associations, whether they
are promoting minority identities (ecological, LBGTQ, ethnic, feminist and
others) or critically questioning established social norms. (Net.culture club
MaMa, 2016a)
Please describe the main challenges and opportunities from the dawn of Croatian
civil society. Why did you decide to establish the Multimedia Institute – mi2 and
the Net.culture club MaMa? How did you go about it?
Marcell Mars & Tomislav Medak (MM & TM): The formative context for
our work had been marked by the process of dissolution of Yugoslavia, ensuing
civil wars, and the rise of authoritarian nationalisms in the early 1990s. Amidst the
general turmoil and internecine bloodshed, three factors would come to define
what we consider today as civil society in the Croatian context. First, the newly
created Croatian state – in its pursuit of ethnic, religious and social homogeneity –
was premised on the radical exclusion of minorities. Second, the newly created
state dismantled the broad institutional basis of social and cultural diversity that
existed under socialism. Third, the newly created state pursued its own nationalist
project within the framework of capitalist democracy. In consequence, politically
undesirable minorities and dissenting oppositional groups were pushed to the
fringes of society, and yet, in keeping with the democratic system, had to be
allowed to legally operate outside of the state, its loyal institutions and its
nationalist consensus – as civil society. Under the circumstances of inter-ethnic
conflict, which put many people in direct or indirect danger, anti-war and human
rights activist groups such as the Anti-War Campaign provided an umbrella under
which political, student and cultural activists of all hues and colours could find a
common context. It is also within this context that the high modernism of cultural
production from the Yugoslav period, driven out from public institutions, had
found its recourse and its continuity.
Our loose collective, which would later come together around the Multimedia
Institute and MaMa, had been decisively shaped by two circumstances. The first
was the participation of the Anti-War Campaign, its BBS network ZaMir (Monoskop,
2016c), and in particular its journal Arkzin, in the early European network culture.
Second, the Open Society Institute, which had financed much of the alternative and
oppositional activities during the 1990s, had started to wind down its operations
towards the end of the millennium. As the Open Society Institute started to spin off its
diverse activities into separate organizations, giving rise to the Croatian Law
Center, the Center for Contemporary Art and the Center for Drama Art, activities
related to Internet development ended up with the Multimedia Institute. The first
factor shaped us as activists and early adopters of critical digital culture, and the
second factor provided us with an organizational platform to start working
together. In 1998 Marcell was the first person invited to work with the Multimedia
Institute. He invited Vedran Gulin and Teodor Celakoski, who in turn invited other
people, and the group organically grew to its present form.
Prior to our coming together around the Multimedia Institute, we had been
working on various projects, such as setting up the cyber-culture platform Labinary
in the space run by the artist initiative Labin Art Express in the former mining town
of Labin, located in the north-western region of Istria. As we started working


together, however, we began to broaden these activities and explore various
opportunities for political and cultural activism offered by digital networks. One of
the early projects was ‘Radioactive’ – an initiative bringing together a broad group
of activists, which was supposed to result in a hybrid Internet/FM radio. The radio
never came into being, yet the project fostered many follow-up activities around
new media and activism in the spirit of ‘don’t hate the media, become the media.’
In these early days, our activities had been strongly oriented towards technological
literacy and education; also, we had a strong interest in political theory and
philosophy. Yet, the most important activity at that time was opening the
Net.culture club MaMa in Zagreb in 2000 (Net.culture club MaMa, 2016a).
PJ & AK: What inspired you to found the Net.culture club MaMa?
MM & TM: We were not keen on continuing the line of work that the
Multimedia Institute was doing under the Open Society Institute, which included,
amongst other activities, setting up the first non-state owned Internet service
provider ZamirNet. The growing availability of Internet access and computer
hardware had made the task of helping political, cultural and media activists get
online less urgent. Instead, we thought that it would be much more important to
open a space where those activists could work together. At the brink of the
millennium, institutional exclusion and access to physical resources (including
space) needed for organizing, working together and presenting that work was a
pressing problem. MaMa was one of only three independent cultural spaces in
Zagreb – the capital of Croatia, with almost one million inhabitants! The Open
Society Institute provided us with a grant to adapt a former downtown leather shop
in a state of disrepair and equip it with the latest technology, ranging from servers to
DJ decks. These resources were made available to all members of the general
public free of charge. Immediately, many artists, media people, technologists, and
political activists started initiating their own programs at MaMa. Our activities ranged
from establishing art servers aimed at supporting artistic and cultural projects on
the Internet (Monoskop, 2016d) to technology-related educational activities,
cultural programs, and publishing. By 2000, nationalism had slowly been losing its
stranglehold on our society, and issues pertaining to capitalist globalisation had
risen to prominence. At MaMa, the period was marked by alter-globalization,
Indymedia, web development, East European net.art and critical media theory.
The confluence of these interests and activities resulted in many important
developments. For instance, soon after the opening of MaMa in 2000, a group of
young music producers and enthusiasts kicked off a daily music program with live
acts, DJ sessions and meetings to share tips and tricks about producing electronic
music. In parallel, we had been increasingly drawn to free software and its
underlying ethos and logic. The Yugoslav legacy of social ownership over the means of
production and worker self-management made us think about how collectivized forms of
cultural production, without the exclusions of private property, could be expanded
beyond the world of free software. We thus talked some of our musician friends
into opening the free culture label EGOBOO.bits and publishing their music,
together with films, videos and literary texts of other artists, under the GNU
General Public License. The EGOBOO.bits project had soon become uniquely


successful: producers such as Zvuk broda, Blashko, Plazmatick, Aesqe, No Name
No Fame, and Ghetto Booties were storming the charts, the label gradually grew to
fifty producers and formations, and we had the artists give regular workshops in
DJ-ing, sound editing, VJ-ing, video editing and collaborative writing at schools
and our summer camp Otokultivator. It inspired us to start working on alternatives
to the copyright regime and on issues of access to knowledge and culture.
PJ & AK: Civil society is a collective conscience, which provides leverage
against national and corporate agendas and serves as a powerful social corrective.
Thus, at the outbreak of the US invasion of Iraq, the Net.culture club MaMa rejected a
$100 000 USAID grant because the invasion was:
a) a precedent based on the rationale of pre-emptive war, b) being waged in
disregard of legitimate processes of the international community, and c)
guided by corporate interests to control natural resources (Multimedia
Institute, 2003 in Razsa, 2015: 82).
Yet, only a few weeks later, MaMa accepted a $100 000 grant from the German
state – and this provoked a wide public debate (Razsa, 2015; Kršić, 2003; Stubbs,
2012).
Now that the heat of the moment has died down, what is your view of this
debate? More generally, how do you decide whose money to accept and whose
money to reject? How do you decide where to publish, where to exhibit, whom to
work with? What is the relationship between idealism and pragmatism in your
work?
MM & TM: Our decision seems justified yet insignificant in the face of the
aftermath of that historical moment. The unilateral decision of US and its allies to
invade Iraq in March 2003 encapsulated both the defeat of global protest
movements that had contested the neoliberal globalisation since the early 1990s
and the epochal carnage that the War on Terror, in its never-ending iterations, is
still reaping today. Nowadays, the weaponized and privatized security regime
follows the networks of supply chains that cut across the logic of borders and have
become vital both for the global circuits of production and distribution (see Cowen,
2014). For the US, our global policeman, the introduction of unmanned weaponry
and all sorts of asymmetric war technologies has reduced the human cost of war
down to zero. By deploying drones and killer robots, it did away with the
fundamental reality check of its own human casualties and made endless war
politically plausible. The low cost of war has resulted in the growing side-lining of
international institutions responsible for peaceful resolution of international
conflicts such as the UN.
Our 2003 decision carried hard consequences for the organization. In a capitalist
society, one can ensure wages either by relying on the market, or on the state, or on
private funding. The USAID grant was our first larger grant after the initial spin-off
money from the Open Society Institute, and it meant that we could employ
some people from our community over the period of the next two years. Yet at the
same time, the USAID had become directly involved in Iraq, aiding the US forces
and various private contractors such as Halliburton in the dispossession and
plunder of the Iraqi economy. Therefore, it was unconscionable to continue
receiving money from them. In light of its moral and existential weight, the
decision to return the money thus had to be made by the general assembly of our
association.
People who were left without wages were part and parcel of the community that
we had built between 2000 and 2003, primarily through Otokultivator Summer
Camps and Summer Source Camp (Tactical Tech Collective, 2016). The other
grant we would receive later that year, from the Federal Cultural Foundation of the
German government, was split amongst a number of cultural organizations and
paid for activities that eventually paved the way for Right to the City (Pravo na
grad, 2016). However, we still could not pay the people who decided to return
the USAID money, so they had to find other jobs. Money never comes without
conditionalities, and passing judgements while disregarding specific economic,
historic and organizational context can easily lead to apolitical moralizing.
We do have certain principles that we would not want to compromise – we do
not work with corporations, we are egalitarian in terms of income, our activities are
free for the public. In political activities, however, idealist positions make sense
only for as long as they are effective. Therefore, our idealism is through and
through pragmatic. It is in a similar manner that we invoke the ideal of the
library. We are well aware that reality is more complex than our ideals. However,
the collective sense of purpose inspired by an ideal can carry over into useful
collective action. This is the core of our interest …
PJ & AK: There has been a lot of water under the bridge since the 2000s. From
a ruined post-war country, Croatia has become an integral part of the European
Union – with all associated advantages and problems. What are today’s main
challenges in maintaining the Multimedia Institute and its various projects? What
are your future plans?
MM & TM: From the early days, Multimedia Institute/MaMa took a twofold
approach. It has always supported people working in and around the organization
in their heterogeneous interests including but not limited to digital technology and
information freedoms, political theory and philosophy, contemporary digital art,
music and cinema. Simultaneously, it has been strongly focused on social and
institutional transformation.
The moment zero of Croatian independence in 1991, which was marked by war,
ethnic cleansing and forceful imposition of contrived mono-national identity, saw
the progressive and modernist culture embracing the political alternative of the anti-war movement. It is within these conditions, which entailed exclusion from access
to public resources, that the Croatian civil society had developed throughout the
1990s. To address this denial of access to financial and spatial resources to civil
society, since 2000 we have been organizing collective actions with a number of
cultural actors across the country to create alternative routes for access to resources
– mutual support networks, shared venues, public funding, alternative forms of
funding. All the while, that organizational work has been implicitly situated in an
understanding of commons that draws on two sources – the social contract of the
free software community, and the legacy of social ownership under socialism.

KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES

Later on, this line of work has been developed towards intersectional struggles
around spatial justice and against privatisation of public services that coalesced
around the Right to the City movement (2007 till present) (Pravo na grad, 2016)
and the 2015 Campaign against the monetization of the national highway network.
In early 2016, with the arrival of the short-lived Croatian government formed by
a coalition of inane technocracy and rabid right wing radicals, many institutional
achievements of the last fifteen years seemed likely to be dismantled in a matter of
months. At the time of writing this text, the collapse of broader social and
institutional context is (again) an imminent threat. In a way, our current situation
echoes the atmosphere of the Yugoslav civil wars in the 1990s. Yet, the Croatian turn
to the right is structurally parallel to the recent turn to the right taking place in most
parts of Europe and the world at large. In the aftermath of the global neoliberal
race to the bottom and the War on Terror, the disenfranchised working class vents
its fears over immigration and insists on the return of nationalist values in various
forms suggested by irresponsible political establishments. If they are not spared the
humiliating sense of being outclassed and disenfranchised by the neoliberal race to
the bottom, why should they be sympathetic to those arriving from the
impoverished (semi)-periphery or to victims of turmoil unleashed by the endless
War on Terror? If globalisation is reducing their life prospects to nothing, why
should they not see the solution to their own plight in the return of the regime of
statist nationalism?
At the Multimedia Institute/MaMa we intend to continue our work against this
collapse of context through intersectionalist organizing and activism. We will
continue to do cultural programs, publish books, and organise the Human Rights
Film Festival. In order to articulate, formulate and document years of practical
experience, we aim to strengthen our focus on research and writing about cultural
policy, technological development, and political activism. Memory of the
World/Public Library project will continue to develop alternative infrastructures
for access, and develop new and existing networks of solidarity and public
advocacy for knowledge commons.
LOCAL HISTORIES AND GLOBAL REALITIES

PJ & AK: Your interests and activities are predominantly centred around
information and communication technologies. Yet, a big part of your social
engagement takes place in Eastern Europe, which is not exactly on the forefront of
technological innovation. Can you describe the dynamics of working from the
periphery around issues developed in global centres of power (such as the Silicon
Valley)?
MM & TM: Computers in their present form were developed primarily in
the post-World War II United States. Their development started from the military
need to develop the mathematics and physics behind nuclear weapons and counter-air defense, but it was soon combined with efforts to address accounting, logistics
and administration problems in diverse fields such as commercial air traffic,
governmental services, banks and finances. Finally, this interplay of the military
and the economy was joined by enthusiasts, hobbyists, and amateurs, giving the
development of (mainframe, micro and personal) computer its final historical
blueprint. This story is written in canonical computing history books such as The
Computer Boys Take Over: Computers, Programmers, and the Politics of
Technical Expertise. There, Nathan Ensmenger (2010: 14) writes: “the term
computer boys came to refer more generally not simply to actual computer
specialists but rather to the whole host of smart, ambitious, and technologically
inclined experts that emerged in the immediate postwar period.”
Very few canonical computing history books cover other histories. But when
that happens, we learn a lot. Be that Slava Gerovitch’s From Newspeak to
Cyberspeak (2002), which recounts the history of Soviet cybernetics, or Eden
Medina’s Cybernetic Revolutionaries (2011), which revisits the history of socialist
cybernetic project in Chile during Allende’s government, or the recent book by
Benjamin Peters How Not to Network a Nation (2016), which describes the history
of Soviet development of Internet infrastructure. Many (other) histories are yet to
be heard and written down. And when these histories get written down, diverse
things come into view: geopolitics, class, gender, race, and many more.
With their witty play and experiments with the medium, the early days of the
Internet were highly exciting. Big corporate websites were not much different from
amateur websites and even spoofs. A (different-than-usual) proximity of positions
of power enabled by the Internet allowed many (media-art) interventions, (rebirth
of) manifestos, establishment of (pseudo)-institutions … In these early times of
Internet’s history and geography, (the Internet subculture of) Eastern Europe
played a very important part. Inspired by Alexei Shulgin, Lev Manovich wrote ‘On
Totalitarian Interactivity’ (1996) where he famously addressed important
differences between understanding of the Internet in the West and the East. For the
West, claims Manovich, interactivity was a perfect vehicle for the ideas of
democracy and equality. For the East, however, interactivity was merely another
form of (media) manipulation. Twenty years later, it seems that Eastern Europe
was well prepared for what the Internet would become today.
PJ & AK: The dominant (historical) narrative of information and
communication technologies is predominantly based in the United States.
However, Silicon Valley is not the only game in town … What are the main
differences between approaches to digital technologies in the US and in Europe?
MM & TM: In the nineties, the lively European scene, which equally included
Eastern Europe, was the centre of critical reflection on the Internet and its
spontaneous ‘Californian ideology’ (Barbrook & Cameron, 1996). Critical culture
in Europe and its Eastern ‘countries in transition’ had a very specific institutional
landscape. In Western Europe, art, media, culture and ‘post-academic’ research in
humanities was by and large publicly funded. In Eastern Europe, development of
the civil society had been funded by various international foundations such as the
Open Society Institute aka the Soros Foundation. Critical new media and critical
art scene played an important role in that landscape. A wide range of initiatives,
medialabs, mailing lists, festivals and projects like Next5minutes (Amsterdam/
Rotterdam), Nettime & Syndicate (mailing lists), Backspace & Irational.org
(London), Ljudmila (Ljubljana), Rixc (Riga), C3 (Budapest) and others constituted
a loose network of researchers, theorists, artists, activists and other cultural
workers.
This network was far from exclusively European. It was very well connected to
projects and initiatives from the United States such as Critical Art Ensemble,
Rhizome, and Thing.net, to projects in India such as Sarai, and to struggles of
Zapatistas in Chiapas. A significant feature of this loose network was its mutually
beneficial relationship with relevant European art festivals and institutions such as
Documenta (Kassel), Transmediale/HKW (Berlin) or Ars Electronica (Linz). As a
rule of thumb, critical new media and art could only be considered in a conceptual
setup of hybrid institutions, conferences, forums, festivals, (curated) exhibitions
and performances – and all of that at once! The Multimedia Institute was an active
part of that history, so it is hardly a surprise that the Public Library project took a
similar path of development and contextualization.
However, European hacker communities were rarely hanging out with critical
digital culture crowds. This is not the place to extensively present the historic
trajectory of different hacker communities, but risking a gross simplification here
is a very short genealogy. The earliest European hacker association was the
German Chaos Computer Club (CCC) founded in 1981. Already in the early
1980s, CCC started to publicly reveal (security) weaknesses of corporate and
governmental computer systems. However, their focus on digital rights, privacy,
cyberpunk/cypherpunk, encryption, and security issues prevailed over other forms
of political activism. The CCC were very successful in raising issues, shaping
public discussions, and influencing a wide range of public actors from digital rights
advocacy to political parties (such as Greens and Pirate Party). However, unlike the
Italian and Spanish hackers, CCC did not merge paths with other social and/or
political movements. Italian and Spanish hackers, for instance, were much more
integral to autonomist/anarchist, political and social movements, and they have
kept this tradition until the present day.
PJ & AK: Can you expand this analysis to Eastern Europe, and ex-Yugoslavia
in particular? What were the distinct features of (the development of) hacker
culture in these areas?
MM & TM: Continuing to risk a gross simplification in the genealogy, Eastern
European hacker communities formed rather late – probably because of the
turbulent economic and political changes that Eastern Europe went through after
1989.
In MaMa, we used to run the programme g33koskop (2006–2012) with a goal to
“explore the scope of (term) geek” (Multimedia Institute, 2016b). An important
part of the program was to collect stories from enthusiasts, hobbyists, or ‘geeks’
who used to be involved in do-it-yourself communities during early days of
(personal) computing in Yugoslavia. From these makers of the first 8-bit computers,
editors of do-it-yourself magazines and other early day enthusiasts, we could learn
that technical and youth culture was strongly institutionally supported (e.g. with
nation-wide clubs called People’s Technics). However, the socialist regime did not
adequately recognize the importance and the horizon of social changes coming
from (mere) education and (widely distributed) use of personal computers. Instead,
it insisted on the impossible mission of its own industrial computer production in order
to preserve autonomy on the global information technology market. What a
horrible mistake … To be fair, many other countries during this period felt able to
achieve their own, autonomous production of computers – so the mistake reflected
the spirit of the times and the conditions of uneven economic and scientific
development.
Looking back on the early days of computing in former Yugoslavia, many geeks
now see themselves as social visionaries and the avant-garde. During the 1990s
across Eastern Europe, unfortunately, they failed to articulate a significant
political agenda other than fighting the monopoly of telecom companies. In their
daily lives, most of these people enjoyed opportunities and privileges of working in
a rapidly growing information technology market. Across the former Yugoslavia,
enthusiasts had started local Linux User Groups: HULK in Croatia, LUGOS in
Slovenia, LUGY in Serbia, Bosnia and Hercegovina, and Macedonia. In the spirit
of their own times, many of these groups focused on attempts to convince the
business that free and open source software (at the time GNU/Linux, Apache,
Exim …) was a viable IT solution.
PJ & AK: Please describe further developments in the struggle between
proponents of proprietary software and the Free Software Movement.
MM & TM: That was the time before Internet giants such as Google, Amazon,
eBay or Facebook built their empires on top of Free/Libre/Open Source Software.
GNU General Public Licence, with its famous slogan “free as in free speech, not
free as in free beer” (Stallman, 2002), was strong enough to challenge the property
regime of the world of software production. Meanwhile, Silicon Valley
experimented with various approaches against the challenge of free software such
as ‘tivoizations’ (systems that incorporate copyleft-based software but impose
hardware restrictions to software modification), ‘walled gardens’ (systems where
carriers or service providers control applications, content and media, while
preventing them from interacting with the wider Internet ecosystem), ‘software-as-a-service’ (systems where software is hosted centrally and licensed through
subscription). In order to support these strategies of enclosure and turn them into
profit, Silicon Valley developed investment strategies of venture capital or
leveraged buyouts by private equity to close the proprietary void left after the
success of commons-based peer production projects, where a large number of
people develop software collaboratively over the Internet without the exclusion by
property (Benkler, 2006).
There was a period when it seemed that cultural workers, artists and hackers
would follow the successful model of the Free Software Movement and build a
universal commons-based platform for peer produced, shared and distributed
culture, art, science and knowledge – that was the time of the Creative Commons
movement. But that vision never materialized. It did not help, either, that start-ups
with no business models whatsoever (e.g. del.icio.us (bookmarks), Flickr
(photos), Youtube (videos), Google Reader (RSS aggregator), Blogspot, and
others) were happy to give their services for free, let contributors use Creative
Commons licences (mostly on the side of licenses limiting commercial use and
adaptations), let news curators share and aggregate relevant content, and let Time
magazine claim that “You” (meaning “All of us”) are The Person of the Year
(Time Magazine, 2006).
PJ & AK: Please describe the interplay between the Free Software Movement
and the radically capitalist Silicon Valley start-up culture, and place it into the
larger context of political economy of software development. What are its
consequences for the hacker movement?
MM & TM: Before the 2008 economic crash, in the course of only a few years,
most of those start-ups and services had been sold to a few businesspeople who
were able to monetize their platforms, users and usees (mostly via advertisement)
or crowd them out (mostly via exponential growth of Facebook and its ‘magic’
network effect). In the end, almost all affected start-ups and services got shut down
(especially those bought by Yahoo). Nevertheless, the ‘golden’ corporate start-up
period brought about a huge enthusiasm and the belief that entrepreneurial spirit,
fostered either by an individual genius or by collective (a.k.a. crowd) endeavour,
could save the world. During that period, unsurprisingly, the idea of hacker
labs/spaces exploded.
Fabulous (self)replicating rapid prototypes, 3D printers, do-it-yourself, the
Internet of Things started to resonate with (young) makers all around the world.
Unfortunately, GNU GPL (v.3 at the time) ceased to be a priority. The
infrastructure of free software had become taken for granted, and enthusiastic
dancing on the shoulders of giants became the most popular exercise. Rebranding
existing Unix services (finger > twitter, irc > slack, talk > im), and/or designing the
‘last mile’ of user experience (often as trivial as adding round corners to the
buttons), would often be a good enough reason to enclose the project, do the
slideshow pitch, create a new start-up backed up by an angel investor, and hope to
win in the game of network effect(s).
Typically, the software stack running these projects would be (almost) completely
GNU GPL (server + client), but parts made on OSX (endorsed for being ‘true’
Unix under the hood) would stay enclosed. In this way, projects would shift from
the world of commons to the world of business. In order to pay respect to the open
source community, and to keep one’s reputation as ‘a good citizen,’ many
software components would get their source code published on GitHub – which is a
prime example of that game of enclosure in its own right. Such developments
transformed the hacker movement from a genuine political challenge to the
property regime into a science fiction fantasy that sharing knowledge while
keeping hackers’ meritocracy regime intact could fix all the world’s problems – if only
we, the hackers, are left alone to play, optimize, innovate and make that amazing
technology!
THE SOCIAL LIFE OF DIGITAL TECHNOLOGIES

PJ & AK: This brings about the old debate between technological determinism
and social determinism, which never seems to go out of fashion. What is your take,
as active hackers and social activists, on this debate? What is the role of
(information) technologies in social development?
MM & TM: Any discussion of information technologies and social
development requires the following parenthesis: notions used for discussing
technological development are shaped by the context of parallel US hegemony
over capitalist world-system and its commanding role in the development of
information technologies. Today’s critiques of the Internet are far from celebration
of its liberatory, democratizing potential. Instead, they often reflect frustration over
its instrumental role in the expansion of social control. Yet, the binary of freedom
and control (Chun, 2008), characteristic for ideological frameworks pertaining to
liberal capitalist democracies, is increasingly at pains to explain what has become
evident with the creeping commercialization and concentration of market power in
digital networks. Information technologies are no different from other general-purpose technologies on which they depend – such as mass manufacture, logistics,
or energy systems.
Information technologies shape capitalism – in return, capitalism shapes
information technologies. Technological innovation is driven by interests of
investors to profit from new commodity markets, and by their capacity to optimize
and increase productivity of other sectors of economy. The public has some
influence over development of information technologies. In fact, publicly funded
research and development has created and helped commercialize most of the
fundamental building blocks of our present digital infrastructures ranging from
microprocessors and touch-screens all the way to packet-switching networks
(Mazzucato, 2013). However, public influence on commercially matured
information technologies has become limited, driven by imperatives of
accumulation and regulatory hegemony of the US.
When considering the structural interplay between technological development
and larger social systems, we cannot accept the position of technological
determinism – particularly not in the form of Promethean figures of entrepreneurs,
innovators and engineers who can solve the problems of the world. Technologies
are shaped socially, yet the position of outright social determinism is not acceptable
either. The reproduction of social relations depends on contingencies of
technological innovation, just as the transformation of social relations depends on
contingencies of actions by individuals, groups and institutions. Given the
asymmetries that exist between the capitalist core and the capitalist periphery, from
which we hail, strategies for using technologies as agents of social change differ
significantly.
PJ & AK: Based on your activist experience, what is the relationship between
information technologies and democracy?
MM & TM: This relation is typically discussed within the framework of
communicative action (Habermas, 1984 [1981], 1987 [1981]) which describes how
the power to speak to the public has become radically democratized, how digital
communication has coalesced into a global public sphere, and how digital
communication has catalysed the power of collective mobilization. Information
technologies have done all that – but the framework of communicative action
describes only a part of the picture. Firstly, as Jodi Dean warns us in her critique of
communicative capitalism (Dean, 2005; see also Dean, 2009), the self-referential
intensity of communication frequently ends up as a substitute for the hard (and
rarely rewarding) work of political organization. Secondly, and more importantly,
Internet technologies have created the ‘winner takes all’ markets and benefited
more highly skilled workforce, thus helping to create extreme forms of economic
inequality (Brynjolfsson & McAfee, 2011). Thus, in any list of the world’s richest
people, one can find an inordinate number of entrepreneurs from information
technology sector. This feeds deeply into neoliberal transformation of capitalist
societies, with growing (working and unemployed) populations left out of social
welfare, which need to be actively appeased or policed. This is the structural
problem behind liberal democracies, electoral successes of the radical right, and
global “Trumpism” (Blyth, 2015). Intrinsic to contemporary capitalism,
information technologies reinforce its contradictions and pave its unfortunate trail
of destruction.
PJ & AK: Access to digital technologies and digital materials is dialectically
intertwined with human learning. For instance, Stallman’s definition of free
software directly addresses this issue in two freedoms: “Freedom 1: The freedom
to study how the program works, and change it to make it do what you wish,” and
“Freedom 3: The freedom to improve the program, and release your improvements
(and modified versions in general) to the public, so that the whole community
benefits” (Stallman, 2002: 43). Please situate the relationship between access and
learning in the contemporary context.
MM & TM: The relationships between digital technologies and education are
marked by the same contradictions and processes of enclosure that have befallen
the free software. Therefore, Eastern European scepticism towards free software is
equally applicable to education. The flip side of interactivity is audience
manipulation; the flip side of access and availability is (economic) domination.
Eroded by rising tuitions, expanding student debt, and poverty-level wages for
adjunct faculty, higher education is getting more and more exclusive. However,
occasional spread of enthusiasm through ideas such as MOOCs does not bring
about more emancipation and equality. While they preach loudly about unlimited
access for students at the periphery, neoliberal universities (backed up by venture
capital) are actually hoping to increase their recruitment business (models).
MOOCs predominantly serve members of privileged classes who already have
access to prestige universities, and who are “self-motivated, self-directed, and
independent individuals who would push to succeed anywhere” (Konnikova,
2014). It is a bit worrying that such a rise of inequality results from attempts to
provide materials freely to everyone with Internet access!
The question of access to digital books for public libraries is different. Libraries
cannot afford digital books from the world’s largest publishers (Digitalbookworld,
2012), and the few e-books they have already acquired must be destroyed after only
twenty-six lendings (Greenfield, 2012). Thus, the issue of access is effectively left
to competition between Amazon, Google, Apple and other companies. The state of
affairs in scientific publishing is not any better. As we wrote in the collective open
letter ‘In solidarity with Library Genesis and Sci-Hub’ (Custodians.online, 2015),
five for-profit publishers (Elsevier, Springer, Wiley-Blackwell, Taylor & Francis
and Sage) own more than half of all existing databases of academic material, which
are licensed at prices so scandalously high that even Harvard, the richest university
of the Global North, has complained that it cannot afford them any longer. Robert
Darnton, the former director of the Harvard Library, says: “We faculty do the research,
write the papers, referee papers by other researchers, serve on editorial boards, all
of it for free … and then we buy back the results of our labor at outrageous prices.”
For all the work supported by public money benefiting scholarly publishers,
particularly the peer review that grounds their legitimacy, prices of journal articles
prohibit access to science to many academics – and all non-academics – across the
world, and render it a token of privilege (Custodians.online, 2015).
PJ & AK: Please describe the existing strategies for struggle against these
developments. What are their main strengths and weaknesses?
MM & TM: Contemporary problems in the field of production, access,
maintenance and distribution of knowledge regulated by globally harmonized
intellectual property regime have brought about tremendous economic, social,
political and institutional crisis and deadlock(s). Therefore, we need to revisit and
rethink our politics, strategies and tactics. We could perhaps find inspiration in the
world of free software production, where it seems that common effort, courage and
charming obstinacy are able to build alternative tools and infrastructures. Yet, this
model might be insufficient for the whole scope of the crisis facing knowledge
production and dissemination. The aforementioned corporate appropriations of free
software such as ‘tivoizations,’ ‘walled gardens,’ ‘software-as-a-service’ etc. bring
about the problem of longevity of commons-based peer-production.
Furthermore, the sense of entitlement to build alternatives to dominant
modes of oppression can only arise in close proximity to capitalist centres of
power. The periphery (of capitalism), in contrast, relies on strategies of ‘stealing’
and bypassing socio-economic barriers by refusing to submit to the harmonized
regulation that sets the frame for global economic exchange. If we honestly look
back and try to compare the achievements of digital piracy vs. the achievements of
reformist Creative Commons, it is obvious that the struggle for access to
knowledge is still alive mostly because of piracy.
PJ & AK: This brings us to the struggle against (knowledge as) private
property. What are the main problems in this struggle? How do you go about them?
MM & TM: Many projects addressing the crisis of access to knowledge
originated in Eastern Europe. Examples include Library Genesis, Science Hub,
Monoskop and Memory of the World. Balázs Bodó’s research (2016) on the ethos
of Library Genesis and Science Hub resonates with our beliefs, shared through all
abovementioned projects, that the concept of private property should not be taken
for granted. Private property can and should be permanently questioned,
challenged and negotiated. This is especially the case in the face of artificial
scarcity (such as lack of access to knowledge caused by intellectual property in
context of digital networks) or selfish speculations over scarce basic human
resources (such as problems related to housing, water or waterfront development)
(Mars, Medak, & Sekulić, 2016).
The struggle to challenge the property regime used to be at the forefront of the
Free Software Movement. In the spectacular chain of recent events, where the
revelations of sweeping control and surveillance of electronic communications
brought about new heroes (Manning, Assange, Snowden), the hacker is again
reduced to the heroic cypherpunk outlaw. This lies firmly within the old Cold War
paradigm of us (the good guys) vs. them (the bad guys). However, only rare and
talented people are able to master cryptography, follow exact security protocols,
practice counter-control, and create a leak of information. Unsurprisingly, these
people are usually white, male, well-educated, native speakers of English.
Therefore, the narrative of us vs. them is not necessarily the most empowering, and
we feel that it requires a complementary strategy that challenges the property
regime as a whole. As our letter at Custodians.online says:
We find ourselves at a decisive moment. This is the time to recognize that the
very existence of our massive knowledge commons is an act of collective
civil disobedience. It is the time to emerge from hiding and put our names
behind this act of resistance. You may feel isolated, but there are many of us.
The anger, desperation and fear of losing our library infrastructures, voiced
across the Internet, tell us that. This is the time for us custodians, being dogs,
humans or cyborgs, with our names, nicknames and pseudonyms, to raise our
voices. Share your writing – digitize a book – upload your files. Don’t let our
knowledge be crushed. Care for the libraries – care for the metadata – care
for the backup. (Custodians.online, 2015)
FROM CIVIL DISOBEDIENCE TO PUBLIC LIBRARY

PJ & AK: Started in 2012, The Public Library project (Memory of the World,
2016a) is an important part of the struggle against the commodification of knowledge.
What is the project about, and how did it come into being?
MM & TM: The Public Library project develops and affirms scenarios for
massive disobedience against current regulation of production and circulation of
knowledge and culture in the digital realm. Started in 2012, it created a lot of
resonance across the peripheries of an unevenly developed world of study and
learning. Earlier that year, the takedown of the book-sharing site Library.nu produced
anxiety that the equalizing effects brought about by piracy would be rolled
back. With the takedown, access to the most recent and most relevant
knowledge – which was (finally) no longer a privilege of rich academic institutions in a
few countries of the Global West, nor the exclusive preserve of academia –
simply disappeared into thin air. Certainly, various alternatives from the
deep semi-periphery have quickly filled the gap. However, it is almost a miracle
that they continue to exist in spite of the prosecution they face on an everyday
basis.

CHAPTER 12

Our starting point for the Public Library project is simple: the public library is the
institutional form devised by societies in order to make knowledge and culture
accessible to all their members regardless of their social or economic status. There is a
political consensus across the board that this principle of access is fundamental to
the purpose of a modern society. Only educated and informed citizens are able to
claim their rights and fully participate in the polity for the common good. Yet, as
digital networks have radically expanded availability of literature and science,
provision of de-commodified access to digital objects has been by and large denied
to public libraries. For instance, libraries frequently do not have the right to
purchase e-books for lending and preservation. If they do, they are limited in
how many times and under what conditions they can lend digital objects
before the license and the object itself are revoked (Greenfield, 2012). The case of
academic journals is even worse. As journals become increasingly digital, libraries
can provide access and ‘preserve’ them only for as long as they pay extortionate
subscriptions. The Public Library project fills in the space that remains denied to
real-world public libraries by building tools for organizing and sharing electronic
libraries, creating digitization workflows and making books available online.
Obviously, we are not alone in this effort. There are many other platforms, public
and hidden, that help people to share books. And the practice of sharing is massive.
PJ & AK: The Public Library project (Memory of the World, 2016a) is a part of
a wider global movement based, amongst other influences, on the seminal work of
Aaron Swartz. This movement consists of various projects including but not
limited to Library Genesis, Aaaaarg.org, UbuWeb, and others. Please situate The
Public Library project in the wider context of this movement. What are its distinct
features? What are its main contributions to the movement at large?
MM & TM: The Public Library project is informed by two historic moments in
the development of the institution of the public library. The first defining moment
happened during the French Revolution – the seizure of library collections from
aristocracy and clergy, and their transfer to the Bibliothèque Nationale and
municipal libraries of the post-revolutionary Republic. The second defining
moment happened in England through working class struggles to make knowledge
accessible to the working class. After the revolution of 1848, that struggle resulted
in tax-supported public libraries. This was an important part of the larger attempt
by the Chartist movement to provide workers with “really useful knowledge”
aimed at raising class consciousness through explaining functioning of capitalist
domination and exploring ways of building workers’ own autonomous culture
(Johnson, 1988). These defining revolutionary moments have instituted two
principles underpinning the functioning of public libraries: a) general access to
knowledge is fundamental to full participation in the society, and b)
commodification of knowledge in the form of book trade needs to be limited by
public de-commodified non-monetary forms of access through public institutions.
In spite of the enormous expansion of the potential for providing access to knowledge
to all, regardless of social status or geographic location, brought about by
digital technologies, public libraries have been radically limited in pursuing their
mission. This results in the side-lining of public libraries amid the enormous expansion of
commodification of knowledge in the digital realm, and brings huge profits to
academic publishers. In response to these limitations, a number of projects have
sprung up in order to maintain public interest by illegal means.
PJ & AK: Can you provide a short genealogy of these projects?
MM & TM: Founded in 1996, Ubu was one of the first online repositories.
Then, in 2001, Textz.com started distributing texts in critical theory. After
Textz.com got shut down in early 2004, it took another year for Aaaaarg to emerge
and Monoskop followed soon thereafter. In the latter part of the 2000s, Gigapedia
started a different trajectory of providing access to comprehensive repositories.
Gigapedia was a game changer, because it provided access to thousands and
thousands of scholarly titles and made access to that large corpus no longer limited
to those working or studying in the rich institutions of the Global North. In 2012,
the publishing industry shut down Gigapedia (at the time known as Library.nu).
Fortunately, the resulting vacuum did not last long, as the Library.nu repository was
merged into the holdings of Library Genesis. Building on the legacy of Soviet
scholars who devised ways of shadow production and distribution of
knowledge in the form of samizdat and early digital distribution of texts in the
post-Soviet period (Balázs, 2014), Library Genesis has built a robust infrastructure
with the mission to provide access to the largest online library in existence while
keeping a low profile. At this moment Library Genesis provides access to books,
and its sister project Science Hub provides access to academic journals. Both
projects are under threat of closure by the largest academic publisher, Reed
Elsevier. Together with the Public Library project, they articulate a position of civil
disobedience.
PJ & AK: Please elaborate the position of civil disobedience. How does it
work; when is it justified?
MM & TM: Legitimating discourses usually claim that shadow libraries fall
into the category of non-commercial fair use. These arguments are definitely valid,
yet they do not build a particularly strong ground for defending knowledge
commons. Once they come under attack, therefore, shadow libraries are typically
shut down. In our call for collective disobedience, we want to make a
larger claim. Access to knowledge as a universal condition could not exist if we –
academics and non-academics across the unevenly developed world – did not
create our own ways of commoning knowledge that we partake in producing and
learning. By introducing the figure of the custodian, we are turning the notion of
property upside down. Paraphrasing the Little Prince, to own something is to be
useful to that which you own (Saint-Exupéry, 1945). Custodians are the political
subjectivity of that disobedient work of care.
Practices of sharing, downloading, and uploading are massive. So, if we want to
prevent our knowledge commons from being taken away over and over again, we
need to publicly and collectively stand behind our disobedient behaviour. We
should not fall into the trap of the debate about legality or illegality of our
practices. Instead, we should acknowledge that our practices, which have been
deemed illegal, are politically legitimate in the face of uneven opportunities
between the Global North and the Global South, in the face of commercialization
of education and student debt in the Global North … This is the meaning of civil
disobedience – to take responsibility for breaking unjust laws.
PJ & AK: We understand your lack of interest in debating legality –
nevertheless, legal services are very interested in your work … For instance,
Marcell has recently been involved in a law suit related to Aaaaarg. Please describe
the relationship between morality and legality in your (public) engagement. When,
and under which circumstances, can one’s moral actions justify breaking the law?
MM & TM: Marcell has recently been drawn into a lawsuit that was filed
against Aaaaarg for copyright infringement. Marcell, Aaaaarg’s founder Sean
Dockray, and a number of institutions ranging from universities to continental-scale
intergovernmental organizations, are being sued by a small publisher from
Quebec whose translation of André Bazin’s What is Cinema? (1967) was twice
scanned and uploaded to Aaaaarg by an unknown user. The book was removed
each time the plaintiff issued a takedown notice, resulting in minimal damages, but
these people are nonetheless being sued for 500,000 Canadian dollars. Should
Aaaaarg not be able to defend its existence on the principle of fair use, a valuable
common resource will yet again be lost and its founder will pay a high price. In this
lawsuit, ironically, there is little economic interest. But many smaller publishers
find themselves squeezed between the privatization of education, which leaves
students and adjuncts with little money for books, and the rapid concentration of
academic publishing. For instance, Taylor and Francis acquired the smaller
humanities publisher Ashgate and shut it down in a matter of months (Save
Ashgate Publishing petition, 2015).
The system of academic publishing is patently broken. It syphons off public
funding of science and education into huge private profits, while denying living
wages and access to knowledge to its producers. This business model is legal, but
deeply illegitimate. Many scientists and even governments agree with this
conclusion – yet, the situation cannot be easily changed because of entrenched power
passed down from the old models of publishing and their imbrication with
allocation of academic prestige. Therefore, the continuous existence of this model
commands civil disobedience.
PJ & AK: The Public Library project (Memory of the World, 2016a) operates
in various public domains including art galleries. Why did you decide to develop
The Public Library project in the context of arts? How do you conceive the
relationship between arts and activism?
MM & TM: We tend to easily conflate the political with the aesthetic.
Moreover, when an artwork expressly claims political character, this seems to
grant it recognition and appraisal. Yet, the socially reflective character of an artwork
and its consciously critical position toward social reality might not be outright
political. Political action remains a separate form of agency, different from
that of socially reflexive, situated and critical art. It operates along a different logic
of engagement. It requires collective mobilization and social transformation.
Having said that, socially reflexive, situated and critical art cannot remain detached
from the present conjuncture and cannot exist outside the political space. Within
the world of arts, alternatives to existing social sensibilities and realities can be
articulated and tested without paying a lot of attention to consistency and
plausibility. Activism, in contrast, generally leaves less room for unrestricted
articulation, because it needs to produce real and plausible effects.
With the generous support of the curatorial collective What, How and for Whom
(WHW) (2016), the Public Library project was surprisingly welcomed by the art
world, and this provided us with a stage to build the project, sharpen its arguments
and ascertain the legitimacy of its political demands. The project was exhibited, with
WHW and other curators, in some of the foremost art venues such as Reina Sofía
in Madrid, Württembergischer Kunstverein in Stuttgart, 98 Weeks in Beirut,
Museum of Contemporary Art Metelkova in Ljubljana, and Calvert 22 in London.
It is great to have a stage where we can articulate social issues and pursue avenues
of action that other social institutions might find risky to support. Yet, while the
space of art provides a safe haven from the adversarial world of political reality, we
think that the addressed issues need to be politicized and that other institutions,
primarily institutions of education, need to stand behind the demand for universal
access. For instance, teaching and research at the University of Zagreb critically
depends on the capacity of its faculty and students to access books and journals
from sources that are deemed illegal – in our opinion, therefore, the University
needs to take a public stand for these forms of access. In the world of
commercialized education and infringement liability, expecting the University to
publicly support us seems highly improbable. However, it is not impossible! This
was recently demonstrated by the Zürich Academy of Arts, which now hosts a
mirror of Ubu – a crucial resource for its students and faculty alike
(Custodians.online, 2016).
PJ & AK: In the current climate of economic austerity, the question of
resources has become increasingly important. For instance, Web 2.0 has narrowed
available spaces for traditional investigative journalism, and platforms such as
Airbnb and Uber have narrowed spaces for traditional labor. Following the same
line of argument, placing activism into art galleries clearly narrows available
spaces for artists. How do you go about this problem? What, if anything, should be
done with the activist takeover of traditional forms of art? Why?
MM & TM: Art can no longer stand outside of the political space, and it can no
longer be safely stowed away into a niche of supposed autonomy within bourgeois
public sphere detached from commodity production and the state. However, art
academies in Croatia and many other places throughout the world still churn out
artists on the premise that art is apolitical. In this view, artists can specialize in a
medium and create in the isolation of their studios – if their artwork is recognized as
masterful, it will be bought on the marketplace. This is patently a lie! Art in Croatia
depends on bonds of solidarity and public support.
Frequently it is art that seeks political forms of engagement rather than vice
versa. A lot of headspace for developing a different social imaginary can be gained
from that venturing aspect of contemporary art. Having said that, art does not need
to be political in order to be relevant and strong.

THE DOUBLE LIFE OF HACKER CULTURE

PJ & AK: The Public Library project (Memory of the World, 2016a) is essentially
pedagogical. When everyone is a librarian, and all books are free, living in the
world transforms into living with the world – so The Public Library project is also
essentially anti-capitalist. This brings us to the intersections between critical
pedagogy of Paulo Freire, Peter McLaren, Henry Giroux, and others – and the
hacker culture of Richard Stallman, Linus Torvalds, Steven Levy, and others. In
spite of various similarities, however, critical pedagogy and hacker culture disagree
on some important points.
With its deep roots in Marxism, critical theory always insists on class analysis.
Yet, imbued with the Californian ideology (Barbrook and Cameron, 1996), hacker
culture is predominantly individualist. How do you go about the tension between
individualism and collectivism in The Public Library project? How do you balance
these forces in your overall work?
MM & TM: Hacker culture has always lived a double life. Personal computers
and the Internet have set up a perfect projection screen for a mind-set which
understands autonomy as a pursuit of personal self-realisation. Such a mind-set sees
technology as a frontier of limitless and unconditional freedom, and easily melds
with the entrepreneurial culture of Silicon Valley. Therefore, it is hardly a surprise
that individualism has become the hegemonic narrative of hacker culture.
However, not all hacker culture is individualist and libertarian. Since the 1990s,
hacker culture has been heavily divided between radical individualism and radical
mutualism. Fred Turner (2006), Richard Barbrook and Andy Cameron (1996) have
famously shown that radical individualism was built on the freewheeling counterculture of the American hippie movement, while radical mutualism was built on
collective leftist traditions of anarchism and Marxism. This is evident in the Free
Software Movement, which has placed ethics and politics before economy and
technology. In her superb ethnographic work, Biella Coleman (2013) has shown
that projects such as the GNU/Linux distribution Debian have espoused radically
collective subjectivities. In that regard, these projects stand closer to mutualist,
anarchist and communist traditions where collective autonomy is the foundation of
individual freedom.
Our work stands in that lineage. Therefore, we invoke two collective figures – the
amateur librarian and the custodian. These figures highlight the labor of communizing
knowledge and maintaining infrastructures of access, refuse to leave the commons
to the authority of professions, and create openings where technologies and
infrastructures can be re-claimed for radically collective and redistributive
endeavours. In that context, we are critical of recent attempts to narrow hacker
culture down to issues of surveillance, privacy and cryptography. While these
issues are clearly important, they (again) reframe the hacker community through
the individualist dichotomy of freedom and privacy, and, more broadly, through
the hegemonic discourse of the post-historical age of liberal capitalism. In this
way, the essential building blocks of the hacker culture – relations of production,
relations of property, and issues of redistribution – are being drowned out, and
the collective and massive endeavour of commoning is being eclipsed by the
capacity of the few crypto-savvy tricksters to avoid government control.
Obviously, we strongly disagree with the individualist, privative and 1337 (elite)
thrust of these developments.
PJ & AK: The Public Library project (Memory of the World, 2016a) comes
very close to visions of deschooling offered by authors such as Ivan Illich (1971),
Everett Reimer (1971), Paul Goodman (1973), and John Holt (1967). Recent
research indicates that digital technologies offer some fresh opportunities for the
project of deschooling (Hart, 2001; Jandrić, 2014, 2015b), and projects such as
Monoskop (Monoskop, 2016) and The Public Library project (Memory of the
World, 2016a) provide important stepping-stones for emancipation of the
oppressed. Yet, such forms of knowledge and education are hardly – if at all –
recognised by the mainstream. How do you go about this problem? Should these
projects try and align with the mainstream, or act as subversions of the mainstream,
or both? Why?
MM & TM: We are currently developing a more fine-tuned approach to
educational aspects of amateur librarianship. The forms of custodianship over
knowledge commons that underpin the practices behind Monoskop, Public Library,
Aaaaarg, Ubu, Library Genesis, and Science Hub are part and parcel of our
contemporary world – whether you are a non-academic with no access to scholarly
libraries, or a student or faculty member outside the few well-endowed academic institutions
in the Global North. As much as commercialization and privatization of education
are becoming mainstream across the world, so are the strategies of reproducing
one’s knowledge and academic research that depend on the de-commodified access
provided by shadow libraries.
Academic research papers are narrower in scope than textbooks, and Monoskop
is thematically more specific than Library Genesis. However, all these practices
exhibit ways in which our epistemologies and pedagogies are built around
institutional structures that reproduce inequality and differentiated access based on
race, gender, class and geography. By building our own knowledge infrastructures, we
build different bodies of knowledge and different forms of relating to our realities –
in the words of Walter Mignolo, we create new forms of epistemic disobedience
(2009). Through Public Library, we have digitized and made available several
collections that represent epistemologically different corpuses of knowledge. A
good example of that is the digital collection of books selected by Black Panther
Herman Wallace as his dream library for political education (Memory of the
World, 2016b).
PJ & AK: Your work breaks traditional distinctions between professionals and
amateurs – when everyone becomes a librarian, the concepts of ‘professional
librarian’ and ‘amateur librarian’ become obsolete. Arguably, this tension is an
inherent feature of the digital world – similar trends can be found in various
occupations such as journalism and arts. What are the main consequences of the
new (power) dynamics between professionals and amateurs?
MM & TM: There are many tensions between amateurs and professionals.
There is the general tension, which you refer to as “the inherent feature of the
digital world,” but there are also more historically specific tensions. We, amateur
librarians, are mostly interested in seizing various opportunities to politicize and
renegotiate the positions of control and empowerment in the tensions that are
already there. We found that storytelling is a particularly useful, efficient and
engaging way of politicization. The naïve and oft-overused claim – particularly
during the Californian nineties – of the revolutionary potential of emerging digital
networks turned out to be a good candidate for replacement by a story dating back
two centuries earlier: the story of the emergence of public libraries in the early days
of the French bourgeois revolution in the 18th century.
The seizure of book collections from the Church and the aristocracy in the
course of revolutions casts an interesting light on the tensions between the
professionals and the amateurs. Namely, the seizure of book collections didn’t lead
to an Enlightenment in the understanding of the world – a change in the paradigm of
how we humans learn, write and teach each other about the world. The steam engine,
the steam-powered rotary press, railroads, electricity and other revolutionary
technological innovations were not seen as results of scientific inquiry. Instead,
they were by and large understood as developments in disciplines such as
mechanics, engineering and practical crafts, which did not challenge religion as the
foundational knowledge about the world.
Consequently, public prayers continued to act as “hoped for solutions to cattle
plagues in 1865, a cholera epidemic in 1866, and a case of typhoid suffered by the
young Prince (Edward) of Wales in 1871” (Gieryn, 1983). Scientists of the time
had to demarcate science from both the religion and the mechanics to provide a
rationale for its supriority as opposed to the domains of spiritual and technical
discovery. Depending on whom they talked to, asserts Thomas F. Gieryn, scientists
would choose to discribe the science as either theoretical or empirical, pure or
applied, often in contradictory ways, but with a clear goal to legitimate to
authorities both the scientific endavor and its claim to resources. Boundary-work of
demarcation had the following characteristics:
(a) when the goal is expansion of authority or expertise into domains claimed
by other professions or occupations, boundary-work heightens the contrast
between rivals in ways flattering to the ideologists’ side;
(b) when the goal is monopolization of professional authority and resources,
boundary-work excludes rivals from within by defining them as outsiders
with labels such as ‘pseudo,’ ‘deviant,’ or ‘amateur’;
(c) when the goal is protection of autonomy over professional activities,
boundary-work exempts members from responsibility for consequences of
their work by putting the blame on scapegoats from outside. (Gieryn, 1983:
791–792)
Once institutionally established, modern science and its academic system have
become the exclusive instances where emerging disciplines now had to seek
recognition and acceptance. The new disciplines (and their respective professions),
in order to become acknowledged by the scientific community as legitimate, had to
repeat the same boundary-work as the science in general once had to go through
before.
The moral of this story is that the best way for a new scientific discipline to
claim its territory was to articulate the specificity and importance of its insights in a
domain no other discipline claimed. It could achieve that by theorizing,
formalizing, and writing its own vocabulary, methods and curricula, and finally by
asking the society to see its own benefit in acknowledging the discipline, its
practitioners and its practices as a separate profession – giving it the green light to
create its own departments and eventually join the productive forces of the world.
This is how democratization of knowledge led to the professionalization of science.
Another frequent reference in our storytelling is the history of
professionalization of computing and its consequences for the fields and disciplines
where the work of computer programmers plays an important role (Ensmenger,
2010: 14; Krajewski, 2011). In his great book Paper Machines (2011), Markus
Krajewski, looking back on the history of the index card catalog (an analysis that is
formative for our understanding of the significance of the library catalog as an
epistemic tool), introduced the thought-provoking idea of the logical equivalence of
the developed index card catalog and the Turing machine, thus making the library a
vanguard of computing. Granting that equivalence, we nevertheless think that the
professionalization of computing much better explains the challenges of today’s
librarianship and the tensions between amateur and professional librarians.
The world recognized the importance and potential of computer technology
long before computer science won its autonomy in academia. Computer
science first had to struggle through its own historical phase of boundary-work. In 1965, the Association for Computing Machinery (ACM) decided to
pool together various attempts to define the terms and foundations of computer
science. Still, the field wasn’t given its definition before Donald Knuth
and his colleagues established the algorithm as the principal unit of analysis in
computer science in the first volume of Knuth’s canonical The Art of Computer
Programming (2011 [1968]). Only once the algorithm was posited as the main unit
of study of computer science, which also served as the basis for ACM’s
‘Curriculum ’68’ (Atchison et al., 1968), was the path properly paved for the future
departments of computer science in the university.
PJ & AK: What are the main consequences of these stories for computer
science education?
MM & TM: Not everyone was happy with the algorithm’s central position in
computer science. Furthermore, since the early days, the computer industry has been
complaining that the university does not provide students with practical
knowledge. Back in 1968, for instance, IBM researcher Hal Sackman said:
new departments of computer science in the universities are too busy
teaching simon-pure courses in their struggle for academic recognition to pay
serious time and attention to the applied work necessary to educate
programmers and systems analysts for the real world. (in Ensmenger, 2010:
133)
The computer world remains a weird hybrid where knowledge is produced in both
academic and non-academic settings, through academic curricula – but also
through fairs, informal gatherings, homebrew computer clubs, hacker communities
and the like. Without the enthusiasm and the experiments in how
knowledge can be transferred and circulated between peers, we would
probably never have arrived at the Personal Computer Revolution of the early
1980s. Without the number of personal computers already in use, we would
probably never have experienced the Internet revolution of the early 1990s. It is
through such historical development that computer science became the academic
centre of the larger computer universe which spread its tentacles into almost all
other known disciplines and professions.
PJ & AK: These stories describe the process of professionalization. How do
you go about its mirror image – the process of amateurisation?
MM & TM: Systematization, vocabulary, manuals, tutorials, curricula – all the
processes necessary for achieving academic autonomy and importance in the world
– prime a discipline for automatization of its various skills and workflows into
software tools. That happened to photography (Photoshop, 1990; Instagram, 2010),
architecture (AutoCAD, 1982), journalism (Blogger, 1999; WordPress, 2003),
graphic design (Adobe Illustrator, 1986; Pagemaker, 1987; Photoshop, 1988;
Freehand, 1988), music production (Steinberg Cubase, 1989), and various other
disciplines (Memory of the World, 2016b).
Usually, after such a software tool is developed and introduced into the
discipline, a period begins during which a number of amateurs ‘join’ that
profession: an army of enthusiasts with a specific skill, many self-trained and with an
understanding of a wide range of software tools. This phenomenon often
marks a crisis as amateurs coming from different professional backgrounds start to
compete with certified and educated professionals in that field. Still, the future
development of the same software tools remains under the control of software
engineers, who become experts in established workflows, and who promise further
optimizations in the field. This crisis of old professions becomes even more
pronounced if the old business models – and their corporate monopolies – are
challenged by the transition to a digital network economy and possibly face the
algorithmic replacement of their workforce and assets.
For professions under these challenging conditions, today it is often too late for
boundary-work described in our earlier answer. Instead of maintaining authority
and expertise by labelling upcoming enthusiasts as ‘pseudo,’ ‘deviant,’ or
‘amateur,’ contemporary disciplines therefore need to revisit their own roots, values,
vision and benefits for society and then (re-)articulate the corpus of knowledge that
the discipline should maintain for the future.
PJ & AK: How does this relate to the dichotomy between amateur and
professional librarians?
MM & TM: We regard the e-book management software Calibre (2016),
written by Kovid Goyal, as a software tool which has benefited from the
knowledge produced, passed on and accumulated by librarians for centuries.
Calibre has made the task of creating and maintaining the catalog easy.

KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES

Our vision is to make sharing, aggregating and accessing catalogs easy and
playful. We like the idea that every rendered catalog is stored on a local hard disk,
that an amateur librarian can choose when to share, and that when she decides to
share, the catalog gets aggregated into a library together with the collections of
other fellow amateur librarians (at https://library.memoryoftheworld.org). For the
purpose of sharing we wrote the Calibre plugin named let’s share books and set up
the related server infrastructure – both of which are easily replicable and
deployable into distributed clones.
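Purely as an illustration, the aggregation step described above can be sketched in a few lines of Python. The function name, record shape and librarian names below are all hypothetical, chosen for the example; this is not the actual let’s share books code, only a sketch of the idea that each librarian’s locally stored catalog is merged, when they choose to share it, into one common library:

```python
# Hypothetical sketch: each amateur librarian keeps a local catalog
# (a list of book records); shared catalogs are merged into a single
# aggregate library, deduplicated by (title, authors).

def aggregate_catalogs(shared_catalogs):
    """Merge the catalogs that librarians chose to share.

    shared_catalogs maps a librarian's name to a list of records like
    {"title": ..., "authors": [...]}. Returns one deduplicated list,
    where each record remembers who shared it.
    """
    aggregate = {}
    for librarian, catalog in shared_catalogs.items():
        for record in catalog:
            # Normalize title and authors so the same book from two
            # librarians collapses into one aggregate entry.
            key = (record["title"].strip().lower(),
                   tuple(sorted(a.lower() for a in record["authors"])))
            entry = aggregate.setdefault(key, {**record, "shared_by": []})
            entry["shared_by"].append(librarian)
    return list(aggregate.values())

# Example: two librarians decide to share; one book overlaps.
library = aggregate_catalogs({
    "nenad": [{"title": "Praxis", "authors": ["G. Petrovic"]}],
    "ana": [{"title": "Praxis", "authors": ["G. Petrovic"]},
            {"title": "Written-off", "authors": ["Public Library"]}],
})
```

The design point the sketch tries to capture is that aggregation is additive and reversible: the local catalog remains the source of truth on the librarian’s own disk, and the common library is simply a merge of whatever is currently shared.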
Together with Voja Antonić, the legendary inventor of the first eight-bit
computer in Yugoslavia, we also designed and developed a series of book scanners
and used them to digitize hundreds of books focused on Yugoslav humanities, such
as the Digital Archive of Praxis and the Korčula Summer School (2016), the Catalogue
of Liberated Books (2013), Written-off (2015), a collection of books thrown away
from Croatian public libraries during the ideological cleansing of the 1990s, and the collection of
books selected by the Black Panther Herman Wallace as his dream library for
political education (Memory of the World, 2016b).
In our view, amateur librarians are complementary to professional librarians,
and there is so much to learn and share between the two. Amateur librarians care,
with curiosity, passion and love, about books which are not (yet) digitally curated;
they dare to disobey in pursuit of the emancipatory vision of the world which is
now under threat. If we, amateur librarians, ever succeed in our pursuits, that
should secure the existing jobs of professional librarians and open up many new
and exciting positions. When knowledge is easily accessed, (re)produced and
shared, there will be so much to follow up on.
TOWARDS AN ACTIVIST PUBLIC PEDAGOGY

PJ & AK: You organize talks and workshops, publish books, and maintain a major
regional hub for people interested in digital cultures. In Croatia, your names are
almost synonymous with social studies of the digital – worldwide, you are
recognized as regional leaders in the field. Such engagement has a prominent
pedagogical component – arguably, the majority of your work can be interpreted as
public pedagogy. What are the main theoretical underpinnings of your public
pedagogy? How does it work in practice?
MM & TM: Our organization is a cluster of heterogeneous communities and
fields of interest. Therefore, our approaches to public pedagogy hugely vary. In
principle, we subscribe to the idea that all intelligences are equal and that all
epistemology is socially structured. In practice, this means that our activities are
syncretic and inclusive. They run in parallel without falling under the same
umbrella, and they bring together people of varying levels of skill – who bring in
various types of knowledge, and who arrive from various social backgrounds.
Working with hackers, we favour a hands-on approach. For a number of years
Marcell has organized a weekly Skill Sharing program (Net.culture club MaMa,
2016b) which started from very basic skills. The bar was incrementally raised to
today’s level of a highly specialized meritocratic community of 1337 hackers. As

CHAPTER 12

the required skill level got too demanding, some original members left the group –
yet, the community continues to accommodate geeks and freaks. At the other end,
we maintain a theoretically inflected program of talks, lectures and publications.
Here we invite a mix of upcoming theorists and thinkers and some of the most
prominent intellectuals of today such as Jacques Rancière, Alain Badiou, Saskia
Sassen and Robert McChesney. This program creates a larger intellectual context,
and also provides space for our collaborators in various activities.
Our political activism, however, takes an altogether different approach. More
often than not, our campaigns are based on inclusive planning and direct decision
making processes with broad activist groups and the public. However, such
inclusiveness is usually made possible by a campaigning process that allows the
articulation of certain ideas in public and enables popular mobilization. For instance, before
the Right to the City campaign against the privatisation of the pedestrian zone in
Zagreb’s Varšavska Street coalesced (Pravo na grad, 2016), we tactically
used the media for more than a year to clarify the underlying issues of urban development
and mobilize broad public support. At its peak, this campaign involved no fewer than
200 activists in the direct decision-making process and thousands of
citizens in the streets. Its prerequisite was hard day-to-day work by a small group
of people organized by Teodor Celakoski, a key member of our collective.
PJ & AK: Your public pedagogy provides a great opportunity for personal
development – for instance, talks organized by the Multimedia Institute have been
instrumental in shaping our educational trajectories. Yet, you often tackle complex
problems and theories, which are often described using demanding concepts and
language. Consequently, your public pedagogy is inevitably restricted to those who
already possess considerable educational background. How do you balance the
popular and the elitist aspects of your public pedagogy? Do you intend to try and
reach wider audiences? If so, how would you go about that?
MM & TM: Our cultural work equally consists of more demanding and more
popular activities, which mostly work together in synergy. Our popular Human
Rights Film Festival (2016) reaches thousands of people; yet, its highly selective
programme echoes our (more) theoretical concerns. Our political campaigns
aim at scalability, too. Demanding and popular activities do not contradict
each other. However, they do require very different approaches and depend on
different contexts and situations. In our experience, a wide public response to a
social cause cannot be simply produced by shaping messages or promoting causes
in ways that are considered popular. The response of the public primarily depends
on a broadly shared understanding, no matter its complexity, that a certain course
of action has an actual capacity to transform a specific situation. Recognizing that
moment, and acting tactfully upon it, is fundamental to building a broad political
process.
This can be illustrated by the aforementioned Custodians.online letter (2015)
that we recently co-authored with a number of our fellow library activists against
the injunction that allows Elsevier to shut down the two most important repositories
providing access to scholarly writing: Science Hub and Library Genesis. The letter
is clearly a product of our specific collective work and dynamic. Yet, it also
articulates various aspects of discontent around this impasse in access to
knowledge, so it resonates with a huge number of people around the world and
gives them a clear indication that there are many who disobey the global
distribution of knowledge imposed by the likes of Elsevier.
PJ & AK: Your work is probably best described by John Holloway’s phrase
“in, against, and beyond the state” (Holloway, 2002, 2016). What are the main
challenges of working under such conditions? How do you go about them?
MM & TM: We could situate the Public Library project within the structure of
tactical agency, where one famously moves into the territory of institutional power
of others. While contesting the regulatory power of intellectual property over
access to knowledge, we thus resort to appropriation of universalist missions of
different social institutions – public libraries, UNESCO, museums. Operating in an
economic system premised on unequal distribution of means, they cannot but fail
to deliver on their universalist promise. Thus, while public libraries have a mission
to provide access to knowledge to all members of the society, they are severely
limited in what they can do to accomplish that mission in the digital realm. By
claiming the mission of universal access to knowledge for shadow libraries,
collectively built shared infrastructures redress the current state of affairs outside of
the territory of institutions. To that extent, these acts of commoning can indeed be
regarded as positioned beyond the state (Holloway, 2002, 2016).
Yet, while shadow libraries can complement public libraries, they cannot
replace public libraries. And this shifts the perspective from ‘beyond’ to ‘in and
against’: we all inhabit social institutions which reflect uneven development in and
between societies. Therefore, we cannot simply operate within binaries: powerful
vs. powerless, institutional vs. tactical. Our space of agency is much more complex
and blurry. Institutions and their employees resist imposed limitations, and
understand that their spaces of agency reach beyond institutional limitations.
Accordingly, the Public Library project enjoys strong and unequivocal complicity
of art institutions, schools and libraries for its causes and activities. While
collectively building practices that abolish the present state of affairs and reclaim
the dream of universal access to knowledge, we rearticulate the vision of a
radically equal society equipped with institutions that can do justice to that
“infinite demand” (Critchley, 2013). We are collectively pursuing this dream – in
the words of our friend and continuing inspiration Aaron Swartz: “With
enough of us, around the world, we’ll not just send a strong message opposing the
privatization of knowledge – we’ll make it a thing of the past. Will you join us?”
(Swartz, 2008).


 
