Barok
Techniques of Publishing
2014


Techniques of Publishing

Draft translation of a talk given at the seminar Informace mezi komoditou a komunitou [The Information Between Commodity and Community] held at Tranzitdisplay in Prague, Czech Republic, on May 6, 2014

My contribution has three parts. I will begin by sketching the current environment of publishing in general, move on to some of the specificities of publishing
in the humanities and art, and end with a brief introduction to the Monoskop
initiative I was asked to include in my talk.
I would like to thank Milos Vojtechovsky, Matej Strnad and CAS/FAMU for
the invitation, and Tranzitdisplay for hosting this seminar. It offers an
opportunity for reflection at a decent distance from my previous presentation
of Monoskop in Prague eight years ago, when I took part in a new media
education workshop prepared by Miloš and Denisa Kera. Many things have
changed since then, not only in new media but in the humanities in general,
and I will try to articulate some of these changes from today's perspective,
primarily the perspective of publishing.

I. The Environment of Publishing
One change, perhaps the most serious, and one which indeed relates to
publishing in the humanities as well, is that what only a year ago was
dismissed as the paranoia of a bunch of so-called technological enthusiasts
is today a fact with which the global public is well acquainted: we are all
being surveilled. Virtually every utterance on the internet, or rather made
by means of equipment connected to it through standard protocols, is
recorded, in encrypted or unencrypted form, on the servers of information
agencies, while copies of a striking share of these data sit on the servers
of private companies. We are only at the beginning of civil mobilization
towards reversing this situation, and the future is open; yet so far nothing
suggests there is any real alternative other than "to demand the impossible."
There are at least two certainties today: surveillance is a feature of every
communication technology controlled by third parties, from post, telegraphy
and telephony to the internet; and at the same time it is a feature of ruling
power in all the variants humankind has come to know. In this regard,
democracy can also be understood as the involvement of its participants in
deciding on the scale and use of the information collected in this way.
I mention this because it suggests that all publishing initiatives, from
libraries through archives and publishing houses to schools, have their
online activities, backends, shared documents and email communication
recorded by public institutions, which intelligence agencies are, or at
least ought to be.
In regard to publishing houses, it is notable that books and other
publications today are printed from digital files and delivered to print over
email, so it is no surprise to claim that a significant share of
electronically prepared publications is stored on servers in the public
service. This means that besides being required to send a number of printed
copies to their national libraries, publishers in fact send electronic
versions to information agencies as well. Obviously, the agencies could not
care less about them, but that changes nothing about the likely fact that,
whatever it means, the world's largest electronic repository of publications
today is the server farms of the NSA.
Information agencies archive publications without approval, perhaps without
awareness, and indeed despite the disapproval of their authors and
publishers, as an "incidental" effect of their surveillance techniques. This
situation is obviously radically different from the totalitarianism we came
to know. Even though secret agencies in the Eastern Bloc blackmailed people
into producing miserable literature as their agents, samizdat publications
could at least theoretically escape their attention.
This is not the only difference. While captured samizdats were read by
flesh-and-blood agents, publications collected through internet surveillance
are "read" by software agents. Both scan texts for "signals", i.e. terms and
phrases whose occurrences trigger interpretative mechanisms that control the
operative components of their organizations.
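As an aside, the mechanics of such "reading" can be pictured with a toy
sketch; the trigger terms below are invented for the example, not any actual
agency's list:

```python
# A toy sketch of a software agent "reading" a text: it scans for
# occurrences of trigger terms. The SIGNALS set is invented for the
# example; real systems match far richer patterns than single words.
SIGNALS = {"cipher", "protest", "leak"}

def scan(text):
    # Normalize: strip punctuation, lowercase, collect the unique words.
    words = {w.strip('.,;:"()').lower() for w in text.split()}
    # Report which trigger terms occur in the text.
    return sorted(SIGNALS & words)

print(scan("The leak prompted a protest over the cipher used."))
# ['cipher', 'leak', 'protest']
```

A hit on any of these terms would, in the scenario described above, hand the
text over to an interpretative mechanism for further processing.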
Today, publishing is similarly political, and from the point of view of power
a potentially subversive activity, just as it was in communist
Czechoslovakia. The difference lies in its scale, reach and technique.
One of the messages of the recent "revelations" is that while it is
recommended to encrypt private communication, the internet is, for its users,
also a medium of direct contact with power. SEO, or search engine
optimization, is now as relevant a technique for websites as for books and
other publications, since all of them are read by similar algorithms, and
authors can read this situation as a political dimension of their work, as a
challenge to transform and model these algorithms through texts.


II. Techniques of research in the humanities literature
Compiling the bibliography
Through this circuitry we arrive at the audience: readers. Today they also
include software and algorithms, such as those used for "reading" by
information agencies and corporations, and others facilitating reading for
the so-called ordinary reader, searching for information online, but also for
the "expert" reader, searching primarily in library systems.
Libraries, as we said, differ from information agencies in that they are
funded by the public not to hide publications from it but to provide access
to them. A telling paradox of the age is that, on the one hand, information
agencies are storing almost all contemporary book production in electronic
form while generally caring absolutely nothing about it, since the "signal"
information lies elsewhere; and on the other, in order to provide electronic
access, paid or direct, libraries have to scan, at considerable cost, even
publications that were prepared for print electronically.
A more remarkable difference is, of course, that libraries select and
catalogue publications.
Their methods of selection are determined in the first place by their public
institutional function as protector and projector of patriotic values,
reflected in a preference for domestic literature, i.e. literature written in
the official state languages. Their methods of cataloguing, on the other
hand, are characterized by sorting by bibliographic records, particularly by
categories of disciplines ordered in a tree structure of knowledge. As a
result, libraries shape research, including academic research, towards a
discursivity that is national and disciplinary, or focused on the oeuvre of a
particular author.
Digitizing catalogue records and allowing readers to search library indexes
by their structural items, i.e. author, publisher, place and year of
publication, words in the title, and disciplines, does not reverse this
tendency at all, but rather extends it to the web as well.
I do not intend to underestimate the value and benefits of library work, nor
the importance of discipline-centred writing or of the recognition of an
author's oeuvre. But consider an author working on an article who, in the
early phase of his research, needs to prepare a bibliography on the activity
of Fluxus in central Europe, or on the use of documentary film in education.
Such research cuts through national boundaries and/or branches of
disciplines, and he is left to travel not only to locate artefacts,
protagonists and experts in the field, but also to find literature, which in
turn makes even the mere process of compiling a bibliography a relatively
demanding and costly activity.

In this sense, the digitization of publications and archival material,
providing free online access to them, and enabling fulltext search, in other
words "open access", catalyzes research across political-geographical and
disciplinary configurations. For while the index of a printed book contains
only selected terms, and in order to search the indexes of several books the
researcher has to have them all at hand, software-enabled search in digitized
texts (with good OCR) works with an index of every single term in all of
them.
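The difference can be made concrete with a minimal inverted index, the basic
structure behind fulltext search; the sample "library", its titles and
contents, are invented for illustration:

```python
from collections import defaultdict

# A minimal inverted index: where a printed book's index lists only
# selected terms, this maps every term to every text containing it,
# so a single query searches across all the texts at once.
def build_index(texts):
    index = defaultdict(set)
    for title, body in texts.items():
        for word in body.lower().split():
            index[word.strip('.,')].add(title)
    return index

library = {
    "Fluxus in Central Europe": "mail art events and event scores",
    "Documentary Film in Education": "film screenings and event records",
}
index = build_index(library)
print(sorted(index["event"]))
# ['Documentary Film in Education', 'Fluxus in Central Europe']
```

One query over the index replaces leafing through the back-of-book indexes of
every volume at hand.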
This kind of research also obviously benefits from online translation tools,
multilingual case bibliographies online, as well as second-hand bookstores
and small specialized libraries that play a corrective role to public ones,
and whose "open access" potential has so far been explored only to a very
small extent, but which I won't discuss further here for lack of time.
Writing
Disciplinarity and patriotism are "embedded" in the texts themselves; I
repeat that I do not say this in a pejorative way.
Bibliographic records in the bodies of texts, notes, attributions of sources
and appended references can be read as formatted addresses of other texts,
making apparent a kind of intertextual structure well known from hypertext
documents. However, for the reader these references remain "virtual". When
following a reference she is led back to a library, and, if interested in
more references, to more libraries. Instead, authors assume a certain general
erudition of their readers, while following references to their actual
sources is perceived as an exception to the standard self-limitation of
reading only the body of the text. Techniques of writing with a virtual
bibliography thus affirm national-disciplinary discourses and form readers
and authors proficient in the field of references set by the collections of
local libraries and the so-called standard literature of the fields they
became familiar with during their studies.
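How such a formatted address could be made machine-readable can be sketched
with a toy parser; the pattern and the record fields below are assumptions
for illustration, not an existing citation standard:

```python
import re

# A bibliographic reference is a formatted address of another text.
# This toy parser (the "Author, Title, Year" pattern is an assumption,
# not a real standard) turns a reference into a structured record that
# software could resolve to a digital copy rather than a library shelf.
PATTERN = re.compile(r"(?P<author>[^,]+), (?P<title>.+), (?P<year>\d{4})$")

def parse_reference(ref):
    m = PATTERN.match(ref)
    return m.groupdict() if m else None

rec = parse_reference("Simondon, Du mode d'existence des objets techniques, 1958")
print(rec["author"], rec["year"])
# Simondon 1958
```

Once parsed, the "virtual" reference becomes a candidate hyperlink, which is
precisely what the reading techniques discussed below exploit.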
When, in this regime of writing, someone in the Czech Republic wants to refer
to the work of Gilbert Simondon or Alexander Bogdanov, to give an example,
the effect of his work will be minimal, since practically nothing by these
authors has been translated into Czech. His closely reading colleague is left
to order the books through a library and wait three to four weeks, or to
order them from an online store, travel to find them, or search for them
online. This applies, in the case of these authors, to readers in the vast
majority of countries worldwide. And we can say with certainty that this is
the case not only for Simondon and Bogdanov but for the vast majority of
authors. Libraries, as nationally and pyramidally situated institutions, face
real challenges with regard to the needs of free research.
This is surely merely one aspect of techniques of writing.

Reading
Reading texts with "live" references and bibliographies on electronic devices
is today not only possible to imagine but to realise as well. This way of
reading allows following references to other texts, visual material, and
other related texts of an author, but also working with occurrences of words
in the text, etc., bringing reading closer to textual analysis and other
interesting levels. Due to time limits I will sketch only one example.
Linear reading is characterized by reading from the beginning of the text to
its end, as is 'tree-like' reading through the content structure of the
document and through occurrences of indexed words. Techniques of close
reading, however, extend yet another aspect: 'moving' through bibliographic
references in the document to particular pages or passages in another one.
They make the virtual reference plastic: texts are separated from one another
by a mere click or tap.
We are well familiar with a similar movement through content on the web:
surfing, browsing, clicking through. This leads us to an interesting
parallel: the standards of structuring, composing, etc., of texts in the
humanities have been evolving for centuries, incomparably longer than the
decades of the web. From this stems one of the historical challenges the
humanities face today: how to attune to the existence of the web, and most
importantly to the epistemological consequences of its irreversible social
penetration. To upload a PDF online is only a taste of the changes in how we
gain and make knowledge, and how we know.
This applies both ways: what is at stake is not only making the production of
the humanities "available" online, not only open access, but also the ways in
which the humanities come to terms with the electronic and technical reality
of their own production, with regard to research, writing, reading and
publishing.
Publishing
The analogy between information agencies and national libraries also points
to the fact that a large portion of publications, particularly those created
in software, is electronic. The exceptions, however, are significant. They
include works made, typeset, illustrated and copied manually, such as
manuscripts written on paper or other media, by hand or using a typewriter or
other mechanical means, and other pre-digital techniques such as lithography,
offset, etc., as well as various forms of writing such as clay tablets, rolls
and codices; in other words, the history of print and publishing in its
striking variety, all of which provides authors and publishers with
heterogeneous means of expression. Although this "segment" is today generally
perceived as artists' books, interesting primarily to collectors, the current
process of massive digitization has triggered revivals, comebacks,
transformations of and novel approaches to publishing. And it is these
publications whose nature is closer to the label 'book' than the automated
electro-chemical version of the offset lithography of digital files on
acid-free paper.
Despite this, it is remarkable to observe the view spreading among publishers
that books created in software are books with the attributes we have known
for ages. On top of that, there is a tendency to handle files such as PDFs,
EPUBs, MOBIs and others as if they were printed books, even subject to the
rules of the limited edition; a consequence of this can be found in the rise
of so-called electronic libraries that "lend" PDF files, so that while
someone reads one, other users are left to wait in line.
Meanwhile, from the point of view of humanities research today, mass-printed
books are first and foremost archives of cultural content, preserved in this
way for the time when we run out of electricity or have the internet
'switched off' in some other way.

III. Monoskop
Finally, I am getting to Monoskop, and to begin with I will try to formulate
a brief definition of it, in three versions.
From the point of view of the humanities, Monoskop is research, or an
inquiry, whose object by its nature renders no answer definite, since that
object includes art and culture in their widest sense, from folk music
through visual poetry to experimental film, namely their history as well as
their theory and techniques. The research is framed by the very means of its
recording, which makes it a practice whose record is an expression with
aesthetic qualities, which in turn means that the process of research is
subject to creative decisions whose outcomes are also perceived
aesthetically.
In the language of cultural management, Monoskop is an independent research
project whose aim is subject to change according to its ongoing findings;
which has no legal body and thus, as an organisation, does not apply for
funding; whose participants have no set roles; and which, notably, operates
with no deadlines. It reaches a global public about which, out of respect for
the privacy of internet users, there are no statistics other than general
statistics on its social network channels and the number of people and bots
registered on its website and subscribed to its newsletter.
At the same time, technically speaking, Monoskop is primarily a website, and
in this regard it is no different from any other communication medium whose
function is to complicate interpersonal communication, if only because it is
a medium with its own specific language, materiality, duration and access.

Contemporary media
Monoskop began ten years ago in the milieu of a group of people running a
cultural space where they organised events, workshops, discussions, a
festival, etc. Their expertise, if that is the word for the trace left after
years spent in higher education, varied widely, spanning from fine art,
architecture and philosophy, through art history and literary theory, to
library studies, cognitive science and information technology. Each of us was
obviously interested in these and other fields beyond his or her own, but in
practice the substance whose centripetal effects brought us into
collaboration was named by the terms new media, media culture and media art.
Notably, it was not contemporary art, because a constituent part of the
praxis was also non-visual expression, information media, etc., so the
research began with the essentially naive question 'of what are we
contemporary?'. Not much had been written about media culture and art as
such, a fact I perceived as a drawback but also as a challenge.
Reflection, discussion and critique need to be grounded in reality, in the
wider context of the field, and thus the research began in the field. From
the beginning, the Monoskop website served to record this environment,
including the people, groups, organizations and events we had been in touch
with and who or which were more or less explicitly affiliated with media
culture. The result is primarily a social geography of live media culture and
art, structured on the wiki into cities, with a focus on the two most recent
decades.
Cities and agents
The first aim was to compile an overview of the agents of this geography in
their wide variety, from small independent and short-lived initiatives to
established museums. The focus on the 1990s and 2000s is of course
problematic. One of its qualities is a parallel to the history of the World
Wide Web, which goes back precisely to the early 1990s and which is, on the
one hand, the primary recording medium of the Monoskop research and, on the
other, a relevant self-archiving and, owing to its properties, presentation
medium; in other words, a platform on which agents not only meet but
potentially influence one another as well.
http://monoskop.org/Prague
The records are of diverse length and quality, while the priorities for what
they consist of can generally be summed up in several points, in the
following order:


1. Inclusion of a person, organisation or event in the context of the
structure. So in the case of a festival or conference held in Prague, the
most important thing is to mention it in the events section of the page on
Prague.
2. Links to their web presence from inside their wiki pages, which usually
implies their (self-)presentation.
http://monoskop.org/The_Media_Are_With_Us
3. Basic information, including a name or title in an original language, dates
of birth, foundation, realization, relations to other agents, ideally through
links inside the wiki. These are presented in narrative and in English.
4. Literature or bibliography in as many languages as possible, with links to
versions of texts online if there are any.
5. Biographical and other information relevant for the object of the research,
while the preference is for those appearing online for the first time.
6. Audiovisual material, works, especially those that cannot be found on linked
websites.
Even though the pages are structured in much the same way, the input fields
are not structured: when you create a wiki account and decide to edit or add
an entry, the wiki editor offers you merely one input box for continuous
text, as is the case on other wiki websites. A better way to describe their
format is thus: articles.
There are many related questions about representation, research methodology,
openness and participation, formalization, etc., but I am not going to
discuss them here due to time constraints.
The first research layer thus consists of live and active agents, and of the
relations among them and with them.
Countries
Another layer relates to the question of what the field of media culture and
art stems from; of what, and upon what, it consciously, but also not fully
consciously, builds, comments, relates, negates; in other words, of what it
may be perceived as a post, meta, anti, retro, quasi or neo legacy.
The approach of national histories of twentieth-century art proved relevant
here. These entries are structured in the same way as cities: people, groups,
events, literature, while building upon historical art forms and periods as
they are reflected in a range of literature.

http://monoskop.org/Czech_Republic
The overviews are purposely organised without any attempt to make their
relations to the present more explicit, in order to leave open a wide range
of interpretations and connotations, and at the same time to encourage them.
The focus on the art of the twentieth century originally related to the
countries researched, which were mostly those of central and eastern Europe,
with the foundations of their modern national states and the formations
preserving this field in archives, museums and collections, but also in
publications, etc. Obviously I am not saying that contemporary media culture
is necessarily archived on the web while the art of the twentieth century
lies in collections "offline"; it applies vice versa as well.
In this way, new articles began to appear about filmmakers, fine artists,
theorists and other partakers in the artistic life of the previous century.
Since then, the focus has expanded considerably, to more than a century of
art and new media across the whole continent. Still, this is merely another
layer of the research, one which as yet remains a collection of fragmentary
data without much context. Soon we also hit the limit of what is online about
this field. The next question was how to work with printed sources in the
internet environment.
Log
http://monoskop.org/log
When I installed this blog five years ago, I treated it as a side project, an
offshoot, which, by the fact of being online, might be not only an archive of
selected source literature for the Monoskop research but also a resource for
others, mainly students in the humanities. A few months later I found Aaaarg,
then oriented mainly towards critical theory and philosophy; there was also
Gigapedia, with publications without thematic orientation; and several other
password-protected community library portals. These were the first sources
where I found relevant literature in electronic versions; later there were
others too. I began to scan books and catalogues myself and to receive a
large number of scans by email, and soon came to realise that every new entry
is an event of its own, not only for myself. Judging by the response, the
website has a wide usership across all continents.
At this point it is proper to mention copyright. When deciding whether to
include this or that publication, there are at least two moments always
present. One brings me back to my local library on the outskirts of
Bratislava in the early 1990s and asks: had I found this book there and then,
could it have changed my life? Because the books that did, I was given only
later and elsewhere; and here I think of people sitting behind computers in
Belarus, China or Congo. And even if not, the second moment is a wondering
whether the text has the potential to open up serious questions about
disciplinarity or national discursivity in the humanities; here I am reminded
of a recent study claiming that more than half of academic publications are
read by no more than three people: their author, reviewer and editor. This
does not imply that it is necessary to promote them to more people, but
rather to think about the reasons why this is so. It seems that the
consequences of combining high selectivity with open access resonate with
publishers and authors as well, from whom complaints are rather scarce; and
even if I sometimes do not understand the reasons behind those I receive, I
respect them.
Media technology
Over the years I have arrived, from an ontological perspective, at two main
findings about media and technology.
For a long time I had a tendency to treat technologies as objects, things,
while now it seems much more productive to see them as processes, techniques.
Just as the biologist does not speak of the deer as biology. In this sense,
technology is the science of techniques, including cultural techniques, which
span from reading, writing and counting to painting, programming and
publishing.
Media in the humanities are a compound of two long-unrelated histories. One
treats media as means of communication, signals sent from point A to point B,
lacking context and meaning. The other speaks of media as artistic means of
expression, such as painting, sculpture, poetry, theatre, music or film. The
term "media art" is emblematic of this amalgam, and the historical awareness
of these two threads sheds new light on it.
Media technology in art and the humanities continues to be the primary object
of Monoskop's research.
I have attempted to comment on the political, aesthetic and technical aspects
of publishing. Let me finish by saying that Monoskop is an initiative open to
people and to the future, and you are more than welcome to take part in it.

Dušan Barok
Written May 1-7, 2014, in Bergen and Prague. Translated by the author on May 10-13,
2014. This version generated June 10, 2014.


Sekulic
Legal Hacking and Space
2015


# Legal hacking and space

## What can urban commons learn from the free software hackers?

* [Dubravka Sekulic](https://www.eurozine.com/authors/sekulic-dubravka/)

4 November 2015

There is now a need to readdress urban commons through the lens of the digital
commons, writes Dubravka Sekulic. The lessons to be drawn from the free
software community and its resistance to the enclosure of code will likely
prove particularly valuable where participation and regulation are concerned.

> Commons are a particular type of institutional arrangement for governing the
use and disposition of resources. Their salient characteristic, which defines
them in contradistinction to property, is that no single person has exclusive
control over the use and disposition of any particular resource. Instead,
resources governed by commons may be used or disposed of by anyone among some
(more or less defined) number of persons, under rules that may range from
"anything goes" to quite crisply articulated formal rules that are effectively
enforced.
> (Benkler 2003: 6)

The above definition of commons, from the seminal paper "The political economy
of commons" by Yochai Benkler, addresses any type of commons, whether analogue
or digital. In fact, the concept of commons entered the digital realm from
physical space in order to interpret the type of communities, relationships
and production that started to appear with the development of the free as
opposed to the proprietary. Peter Linebaugh charted in his excellent book
_Magna Carta Manifesto_ , how the creation and development of the concept of
commons were closely connected to constantly changing relationships of people
and communities to the physical space. Here, I argue that the concept was
enriched when it was implemented in the digital field. Readdressing urban
space through the lens of digital commons can enable another imagination and
knowledge to appear around urban commons.

The notion of commons in (urban) space is often complicated by archaic models of
organization and management - "the pasture we knew how to share". There is a
tendency to give the impression that the solution is in reverting to the past
models. In the realm of digital though, there is no "pasture" from the Middle
Ages to fall back on. Digital commons had to start from scratch and define its
own protocols of production and reproduction (caring and sharing). Therefore,
the digital commons and free software community can be the one to turn to, not
only for inspiration and advice, but also as a partner when addressing
questions of urban commons. Or, as Marcell Mars would put it "if we could
start again with (regulating and defining) land, knowing what we know now
about digital networks, we could come up with something much better and
appropriate for today's world. That property wouldn't be private, maybe not
even property, but something else. Only then can we say we have learned
something from the digital" (2013).

## Enclosure as the trigger for action

The moment we turn to commons in relation to (urban) space is the moment in
which the pressure to privatize public space and to commodify every aspect of
urban life has become so strong that it can be argued that it mirrors a moment
in which Magna Carta Libertatum was introduced to protect the basic
reproduction of life for those whose sustenance was connected to the common
pastures and forests of England in the thirteenth century. At the end of the
twentieth century, urban space became the ultimate commodity, and increasing
privatization not only endangered the reproduction of everyday life in the
city; the rent extraction through privatized public space and housing
endangered bare life itself. Additionally, the city's continuous
privatization of its amenities transformed almost every action in the city,
no matter how mundane (drinking a glass of water from a tap, for example),
into an action that creates profit for some private entity and extracts it
from the community. Thus every activity became labour, which the
citizen-worker is not only alienated from, but also unaware of. David
Harvey's statement
about the city replacing the factory as a site of class war seems to be not
only an apt description of the condition of life in the city, but also a cry
for action.

Richard Stallman's foundational gesture in the creation of free software, the
GNU GPL (General Public License), was a reaction to the logic of scarcity
artificially imposed on the world of code - and the
increasing and systematic enclosure that took place in the late 1970s and
1980s as "a tidal wave of commercialization transformed software from a
technical object into a commodity, to be bought and sold on the open market
under the alleged protection of intellectual property law" (Coleman 2012:
138). Stallman, who worked as a researcher at MIT's Artificial Intelligence
Laboratory, detected how "[m]any programmers are unhappy about the
commercialization of system software. It may enable them to make more money,
but it requires them to feel in conflict with other programmers in general
rather than feel as comrades. The fundamental act of friendship among
programmers is the sharing of programs; marketing arrangements now typically
used essentially forbid programmers to treat others as friends. The purchaser
of software must choose between friendship and obeying the law. Naturally,
many decide that friendship is more important. But those who believe in law
often do not feel at ease with either choice. They become cynical and think
that programming is just a way of making money" (Stallman 2002: 32).

In the period between 1980 and 1984, "one man [Stallman] envisioned a crusade
to change the situation" (Moglen 1999). Stallman understood that in order to
subvert the system, he would have to intervene in the protocols that regulate
the conditions under which the code is produced, and not the code itself;
although he did contribute some of the best lines of code into the compiler
and text editor - the foundational infrastructure for any development. The
gesture that enabled the creation of a free software community that yielded
the complex field of digital commons was not a perfect line of code. The
creation of GNU General Public License (GPL) was a legal hack to counteract
the imposing of intellectual property law on code. At that time, the only
license available for programmers wanting to keep the code free was public
domain, which gave no protection against the code being appropriated and
closed. GPL enabled free codes to become self-perpetuating. Everything built
using a free code had to be made available under the same condition, in order
to secure the freedom for programmers to continue sharing and not breaking the
law. "By working on and using GNU rather than proprietary programs, we can be
hospitable to everyone and obey the law. In addition, GNU serves as an example
to inspire and as a banner to rally others to join in sharing. This can give
us a feeling of harmony, which is impossible if we use software, which is not
free. For about half the programmers I talk to, this is an important happiness
that money cannot replace" (Stallman 2002: 33).

Architects and planners as well as environmental designers have for too long
believed the opposite, that a good enough design can subvert the logic of
enclosure that dominates the production and reproduction of space; that a good
enough design can keep space open and public by the sheer strength of spatial
intervention. Stallman rightly understood that no design is strong enough
to keep private ownership from claiming what it believes belongs to it.
Digital and urban commons, despite operating in completely different realms
and economies, are under attack from the same threat of "market processes"
that "crucially depend upon the individual monopoly of capitalists (of all
sorts) over ownership of the means of production, including finance and land.
All rent, recall, is a return to the monopoly power of private ownership of
some crucial asset, such as land or a patent. The monopoly power of private
property is therefore both the beginning-point and the end-point of all
capitalist activity" (Harvey 2012: 100). Stallman envisioned a bleak future
(2003: 26-28) but found a way to "relate the means to the ends". He understood
that the emancipatory task of a struggle "is not only what has to be done, but
also how it will be done and who will do it" (Stavrides & De Angelis 2010: 7).
Thus, to produce the necessary preconditions - both for a community to emerge
and for future protocols to be grounded - tools and methodologies are needed
through which the community can create both free software and itself.

## Renegotiating (undoing) property, hacking the law, creating community

Property, as an instrument for the allocation of resources, is a right that
is negotiated within and by society, not written in stone or given as such.
The digital, more than any other field, discloses property as being
inappropriate for contemporary relationships between production and
reproduction and, additionally, proves how it is possible to fundamentally
rethink it. The digital offers this possibility as it is non-material,
non-rival and non-exclusive (Meretz 2010), unlike anything in the physical
world. And Elinor Ostrom's lifelong empirical research gives grounds to
believe that eschewing property as the sole instrument of allocation can work
as a tool of management even for rival, excludable goods.
The value of information in digital form is not flat, but property is not the
way to protect that value, as the music industry realized over the course of
the last ten years. Once a copy is _out there_, the cost of protecting its
exclusivity on the grounds of property becomes too high in relation to the
potential value to be extracted. Increasingly, value is extracted from
information by controlling the moment of its release rather than through
subsequent exploitation. Stallman decided to tackle the imposition of the
concept of property on computer code (and by extension to the digital realm as
a whole) by articulating it in another field: just as property is the product
of constant negotiations within a society, so are legal regulations. After
some time, he was joined by "[m]any free software developers [who] do not
consider intellectual property instruments as the pivotal stimulus for a
marketplace of ideas and knowledge. Instead, they see them as a form of
restriction so fundamental (or poorly executed) that they need to be
counteracted through alternative legal agreements that treat knowledge,
inventions, and other creative expressions not as property but rather as
speech to be freely shared, circulated, and modified" (Coleman 2012: 26).

The digital sphere can give a valid example of how renegotiating regulation
can transform a resource from scarce to abundant. When the change from
analogue signal to packet switching began to take effect, the way the finite
radio frequency spectrum was divided and managed was renegotiated, and the
number of slots that could be allocated grew by an order of magnitude while
the absolute size of the spectrum stayed the same. This
shift enabled Brecht's dream of a two-sided radio to become reality, thus
enabling what he had suggested: "change this apparatus over from distribution
to communication".1

According to Lawrence Lessig, what regulates behavior in cyberspace is an
interdependence of four constraints: market, law, architecture and norms
(Lessig 2012: 121-25). Analogously, space can be put in place of cyberspace,
as the regulation of space is the sum of these four constraints. These four
constraints are in a dynamic relationship in which the balance can be tilted
towards one, depending on how much each of these categories puts pressure on
the other three. Changes in any one reflect the regulation of the whole.
"Architecture" in Lessig's theory should be understood broadly as the "built
environment" that regulates behaviour in (cyber)space. In the last few decades
we have experienced the domination of the market reconfiguring the basis of
norms, law and architecture. In order to counteract this, the other three
constraints need to be renegotiated. In digital space, this reconfiguration
happened by having code - that is, the set of instructions written as highly
formalized text in a specific programming language to be executed (usually) by
a computer - recognized as speech before the law, and by hacking the law in
order to disrupt the way that property relationships are formed.

To put it simply, in order to create a change in dynamics between the
architecture, norms and the market, the law had to be addressed first. This is
not a novel procedure, "legal hacking is going on all the time, it is just
that politics is doing it under the veil of legality because they are the
parliament, they are Microsoft, which can hire a whole law firm to defend them
and find all the legal loopholes. Legal hacking is the norm actually" (Bailey
2013). When it comes to physical space, one of the most obvious examples of
the reconfiguration of regulations under the influence of the market is the
creation of legal provisions, norms and architecture to sustain the
development (and privatization) of public space through public-private
partnerships. The decision of the Italian parliament that the privatization of
services (specifically of water management) is legal and does not obstruct
one's access to water as a human right is another example of a crude
manipulation of the law by the state in favour of the market. Unlike legal
hacks by corporations
that aim to create a favourable legal climate for another round of
accumulation through dispossession, Stallman's hack tries to limit the impact
of the market and to create a space of freedom for the creation of a code and
of sharable knowledge, by questioning one of the central pillars of liberal
jurisprudence: (intellectual) property law.

Similarly, translated into physical space, one of the initiatives in Europe
that comes closest to creating a real existing urban commons, Teatro Valle
Occupato in Rome, is doing the same, "pushing the borders of legality of
private property" by legally hacking the institution of a foundation to "serve
a public, or common, purpose" and having "notarized [a] document registered
with the Italian state, that creates a precedent for other people to follow in
its way" (Bailey 2013). This echoes Stallman's hack as the fundamental
gesture by which a community and a whole ecosystem can be formed.

It is obvious that, in order to create and sustain this type of legal hack,
a certain level of awareness and knowledge of how systems, both political and
legal, work is necessary - that is, one must be politically literate.
"While in general", says Italian commons-activist and legal scholar Saki
Bailey, "we've become extremely lazy [when it comes to politics]. We've
started to become a kind of society of people who give up their responsibility
to participate by handing it over to some charismatic leaders, experts of [a]
different type" (2013). Free software hackers, in order to understand and take
part in a constant negotiation that takes place on a legal level between the
market that seeks to cloister the code and hackers who want to keep it free,
had to become literate in an arcane legal language. Gabriella Coleman notes in
_Coding Freedom_ that hacker forums sometimes produce legal analysis just as
serious as one would expect to find in a law office. Like the
occupants of Teatro Valle, free software hackers understand the importance of
devoting time and energy to understand constraints and to find ways to
structurally divert them.

This type of knowledge is not shared and created in isolation, but in
socialization, in discussions in physical or cyber spaces (such as IRC
channels, forums, mailing lists…), the same way free software hackers share
their
knowledge about code. Through this process of socializing knowledge, "the
community is formed, developed, and reproduced through practices focused on
common space. To generalize this principle: the community is developed through
commoning, through acts and forms of organization oriented towards the
production of the common" (Stavrides 2012: 588). Forming a community is thus
another crucial element of the creation of digital commons, and its
development and resilience are even more important. The emerging community was
not given something to manage; it created something together, and together
devised rules of self-regulation and decision-making.

The prime example of this principle in the free software community is the
Debian Project, formed around the development of the Debian Linux
distribution. It is a volunteer organization consisting of around 3,000
developers that since its inception in 1993 has defined a set of basic
principles by which the project and its members conduct their affairs, laid
out in a document called the Debian Social Contract (DSC), to which new
members commit when they are introduced into the community. A special part of
the DSC defines the criteria for "free software", thus regulating the
technical aspects of the project and its relations with the rest of the free
software community. The Debian
Constitution, another document created by the community so it can govern
itself, describes the organizational structure for formal decision-making
within the project.

Another example is Wikipedia, where the community that makes the online
encyclopedia also takes part in creating regulations, with some aspects
debated almost endlessly on forums. It is even possible to detect a loose
community of "Internet users" who took to the streets all over the world when
SOPA (Stop Online Piracy Act) and PIPA (Preventing Real Online Threats to
Economic Creativity and Theft of Intellectual Property Act) threatened to
enclose the Internet as we know it; the proposed legislation was successfully
contested.

Free software projects that represent the core of the digital commons are
most often born of the initiative of individuals, but their growth and life
cycle depend on being picked up by a community, or on generating a community
around them, that is allowed to take part in their regulation and in decisions
about the shape and form the project will take in the future. This is an
important lesson to be transferred to physical space, where many projects fail
because they do not get picked up by the intended community, as that community
is not offered a chance to partake in their creation and, more importantly,
their regulation.

## Building common infrastructure and institutions

"The expansion of intellectual property law" as the main vehicle of the trend
to enclose the code that leads to the act of the creation of free software
and, thus, digital commons, "is part and parcel of a broader neoliberal trend
to privatize what was once under public or under the state's aegis, such as
health provision, water delivery, and military services" (Coleman 2012: 16).
The structural fight headed by the GNU/GPL against the enclosure of code
"defines the contractual relationship that serves to secure the freedom of
means of production and to constitute a community of those participating in
the production and reproduction of free resources. And it is this constitutive
character, as an answer to an every time singular situation of appropriation
by the capital, that is a genuine political emancipation striving for an equal
and free collective production" (Mars & Medak 2004). Thus digital commons "is
based on the _communication_ among _singularities_ and emerges through
collaborative social processes of production" (Negri & Hardt 2005: 204).

The most important lesson urban commons can take from its digital counterpart
is at the same time the most difficult one: how to make a structural hack in
the moment of the creation of an urban commons that will enable it to become
structurally self-perpetuating, thus creating fertile ground not only for a
singular spatialization of urban commons to appear, but to multiply and create
a whole new eco-system. Digital commons was the first field in which what
Negri and Hardt (2009: 3-21) called the "republic of property" was challenged.
Urban commons, in order to really emerge as a spatialization of a new type of
relationship, need to start undoing property as well in order to socially
re-appropriate the city. Or, in the words of Stavros Stavrides, "the most urgent
and promising task, which can oppose the dominant governance model, is the
reinvention of common space. The realm of the common emerges in a constant
confrontation with state-controlled 'authorized' public space. This is an
emergence full of contradictions, perhaps, quite difficult to predict, but
nevertheless necessary. Behind a multifarious demand for justice and dignity,
new roads to collective emancipation are tested and invented. And, as the
Zapatistas say, we can create these roads only while walking. But we have to
listen, to observe, and to feel the walking movement. Together" (Stavrides
2012: 594).

The big task for both digital and urban commons is "[b]uilding a core common
infrastructure [which] is a necessary precondition to allow us to transition
away from a society of passive consumers buying what a small number of
commercial producers are selling. It will allow us to develop into a society
in which all can speak to all, and in which anyone can become an active
participant in political, social and cultural discourse" (Benkler 2003: 9).
This core common infrastructure has to be porous enough to include people that
are not similar, to provide "a ground to build a public realm and give
opportunities for discussing and negotiating what is good for all, rather than
the idea of strengthening communities in their struggle to define their own
commons. Relating commons to groups of "similar" people bears the danger of
eventually creating closed communities. People may thus define themselves as
commoners by excluding others from their milieu, from their own privileged
commons" (Stavrides & De Angelis 2010). Learning carefully from the digital
commons, urban commons need to be conceptualized on the basis of the public,
with a self-regulating community that is open for others to join; a community
that socializes knowledge and thus produces and reproduces the commons,
creating a space for political emancipation capable of making judicial
arguments for the protection and extension of counter-market regulations.

## References

Bailey, Saki (2013): Interview by Dubravka Sekulic and Alexander de Cuveland.

Benkler, Yochai (2003): "The political economy of commons". _Upgrade_ IV, no.
3, 6-9, [www.benkler.org/Upgrade-
Novatica%20Commons.pdf](http://www.benkler.org/Upgrade-
Novatica%20Commons.pdf).

Benkler, Yochai (2006): _The Wealth of Networks: How Social Production
Transforms Markets and Freedom_. New Haven: Yale University Press.

Brecht, Bertolt (2000): "The radio as a communications apparatus". In: _Brecht
on Film and Radio_ , edited by Marc Silberman. Methuen, 41-6.

Coleman, E. Gabriella (2012): _Coding Freedom: The Ethics and Aesthetics of
Hacking_. Princeton University Press / Kindle edition.

Hardt, Michael and Antonio Negri (2005): _Multitude: War and Democracy in the
Age of Empire_. Penguin Books.

Hardt, Michael and Antonio Negri (2011): _Commonwealth_. Belknap Press of
Harvard University Press.

Harvey, David (2012): "The Art of Rent". In: _Rebel Cities: From the Right to
the City to the Urban Revolution_, 1st ed. Verso, 94-118.

Hill, Benjamin Mako (2012): "Freedom for Users, Not for Software". In:
Bollier, David & Helfrich, Silke (eds.): _The Wealth of the Commons: A World
Beyond Market and State_. Levellers Press / E-book.

Lessig, Lawrence (2012): _Code: Version 2.0_. Basic Books.

Linebaugh, Peter (2008): _The Magna Carta Manifesto: Liberties and Commons for
All_. University of California Press.

Mars, Marcell (2013): Interview by Dubravka Sekulic.

Mars, Marcell and Tomislav Medak (2004): "Both devil and gnu",
[www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish](http://www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish).

Martin, Reinhold (2013): "Public and common(s): Places: Design observer",
[placesjournal.org/article/public-and-
commons](https://placesjournal.org/article/public-and-commons).

Meretz, Stefan (2010): "Commons in a taxonomy of goods", [keimform.de/2010
/commons-in-a-taxonomy-of-goods](http://keimform.de/2010/commons-in-a
-taxonomy-of-goods/).

Mitrasinovic, Miodrag (2006): _Total Landscape, Theme Parks, Public Space_ ,
1st ed. Ashgate.

Moglen, Eben (1999): "Anarchism triumphant: Free software and the death of
copyright", First Monday,
[firstmonday.org/ojs/index.php/fm/article/view/684/594](http://firstmonday.org/ojs/index.php/fm/article/view/684/594).

Stallman, Richard and Joshua Gay (2002): _Free Software, Free Society:
Selected Essays of Richard M. Stallman_. GNU Press.

Stallman, Richard and Joshua Gay (2003): "The Right to Read". _Upgrade_ IV,
no. 3, 26-8.

Stavrides, Stavros (2012): "Squares in movement". _South Atlantic Quarterly_
111, no. 3, 585-96.

Stavrides, Stavros (2013): "Contested urban rhythms: From the industrial city
to the post-industrial urban archipelago". _The Sociological Review_ 61,
34-50.

Stavrides, Stavros, and Massimo De Angelis (2010): "On the commons: A public
interview with Massimo De Angelis and Stavros Stavrides". _e-flux_ 17, 1-17,
[www.e-flux.com/journal/on-the-commons-a-public-interview-with-massimo-de-
angelis-and-stavros-stavrides/](http://www.e-flux.com/journal/on-the-commons-a
-public-interview-with-massimo-de-angelis-and-stavros-stavrides/).

1

"[...] radio is one-sided when it should be two-. It is purely an apparatus
for distribution, for mere sharing out. So here is a positive suggestion:
change this apparatus over from distribution to communication". See "The radio
as a communications apparatus", Brecht 2000.

Published 4 November 2015
Original in English
First published by dérive 61 (2015)

Contributed by dérive © Dubravka Sekulic / dérive / Eurozine


