Murtaugh
A bag but is language nothing of words
2016


## A bag but is language nothing of words

### From Mondotheque


(language is nothing but a bag of words)

[Michael Murtaugh](/wiki/index.php?title=Michael_Murtaugh "Michael Murtaugh")

In text indexing and other machine reading applications the term "bag of
words" is frequently used to underscore how processing algorithms often
represent text using a data structure (word histograms or weighted vectors)
where the original order of the words in sentence form is stripped away. While
"bag of words" might well serve as a cautionary reminder to programmers of the
essential violence perpetrated to a text and a call to critically question the
efficacy of methods based on subsequent transformations, the expression's use
seems in practice more like a badge of pride or a schoolyard taunt that would
go: Hey language: you're nothin' but a big BAG-OF-WORDS.

## Bag of words

In information retrieval and other so-called _machine-reading_ applications
(such as text indexing for web search engines) the term "bag of words" is used
to underscore how in the course of processing a text the original order of the
words in sentence form is stripped away. The resulting representation is then
a collection of each unique word used in the text, typically weighted by the
number of times the word occurs.

Bags of words, also known as word histograms or weighted term vectors, are a
standard part of the data engineer's toolkit. But why such a drastic
transformation? The utility of "bag of words" is in how it makes text amenable
to code, first in that it's very straightforward to implement the translation
from a text document to a bag of words representation. More significantly,
this transformation then opens up a wide collection of tools and techniques
for further transformation and analysis purposes. For instance, a number of
libraries available in the booming field of "data sciences" work with "high
dimension" vectors; bag of words is a way to transform a written document into
a mathematical vector where each "dimension" corresponds to the (relative)
quantity of each unique word. While physically unimaginable and abstract
(imagine each of Shakespeare's works as points in a 14 million dimensional
space), from a formal mathematical perspective, it's quite a comfortable idea,
and many complementary techniques (such as principal component analysis) exist
to reduce the resulting complexity.
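As a sketch of how straightforward the translation is, consider a minimal Python version (the text is a toy example; a real pipeline would add tokenization, stop word removal, and a weighting such as tf-idf):

```python
from collections import Counter

def bag_of_words(text):
    # Lowercase and split on whitespace: the order of the words
    # is discarded; only the count of each unique word survives.
    return Counter(text.lower().split())

bag = bag_of_words("the cat sat on the mat")

# A vector representation: one "dimension" per unique word,
# here over the vocabulary of this single document.
vocabulary = sorted(bag)
vector = [bag[word] for word in vocabulary]
print(vocabulary)  # ['cat', 'mat', 'on', 'sat', 'the']
print(vector)      # [1, 1, 1, 1, 2]
```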

What's striking about the bag of words representation, given its centrality in
so many text retrieval applications, is its irreversibility. Producing the
original text from its bag of words representation would require, in essence,
the "brain" of a writer to recompose the sentences, working with the patience
of a devoted cryptogram puzzler to draw from the precise stock of available
words. While "bag of words" might well serve as a cautionary reminder to
programmers of the essential violence perpetrated to a text and a call to
critically question the efficacy of methods based on subsequent
transformations, the expression's use seems in practice more like a badge of
pride or a schoolyard taunt that would go: Hey language: you're nothing but a
big BAG-OF-WORDS. Following this spirit of the term, "bag of words" celebrates
a perfunctory step of "breaking" a text into a purer form amenable to
computation, stripping language of its silly redundant repetitions and
foolishly contrived stylistic phrasings to reveal a purer inner essence.
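The irreversibility is simple to demonstrate: word order alone distinguishes the two (toy) sentences below, so their bags of words are identical, and nothing in the bag selects one reading over the other:

```python
from collections import Counter

a = "the dog bit the man"
b = "the man bit the dog"

# Both sentences reduce to the same bag:
# {'the': 2, 'dog': 1, 'bit': 1, 'man': 1}
same = Counter(a.split()) == Counter(b.split())
print(same)  # True
```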

## Book of words

Lieber's Standard Telegraphic Code, first published in 1896 and republished in
various updated editions through the early 1900s, is an example of one of
several competing systems of telegraph code books. The idea was for both
senders and receivers of telegraph messages to use the books to translate
their messages into a sequence of code words which can then be sent for less
money as telegraph messages were paid by the word. In the front of the book, a
list of examples gives a sampling of how messages like: "Have bought for your
account 400 bales of cotton, March delivery, at 8.34" can be conveyed by a
telegram with the message "Ciotola, Delaboravi". In each case the reduction of
number of transmitted words is highlighted to underscore the efficacy of the
method. Like a dictionary or thesaurus, the book is primarily organized around
key words, such as _act_ , _advice_ , _affairs_ , _bags_ , _bail_ , and
_bales_ , under which exhaustive lists of useful phrases involving the
corresponding word are provided in the main pages of the volume. [1]
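In spirit, a code book is a pair of lookup tables, one for the sender and one (the inverse) for the receiver. The sketch below is only an illustration: the assignment of each phrase to each code word is my assumption, not taken from Lieber's actual tables.

```python
# Hypothetical code book entries in the style of Lieber's: a single
# code word stands in for an entire commercial phrase. (The mapping
# here is invented for illustration.)
decode_table = {
    "Ciotola": "Have bought for your account 400 bales of cotton",
    "Delaboravi": "March delivery, at 8.34",
}
# The sender's table is simply the inverse mapping.
encode_table = {phrase: word for word, phrase in decode_table.items()}

# The receiver expands a short (cheap) telegram back into the message.
telegram = "Ciotola, Delaboravi"
message = ", ".join(decode_table[word] for word in telegram.split(", "))
print(message)
```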

[![Liebers
P1016847.JPG](/wiki/images/4/41/Liebers_P1016847.JPG)](/wiki/index.php?title=File:Liebers_P1016847.JPG)

[![Liebers
P1016859.JPG](/wiki/images/3/35/Liebers_P1016859.JPG)](/wiki/index.php?title=File:Liebers_P1016859.JPG)

[![Liebers
P1016861.JPG](/wiki/images/3/34/Liebers_P1016861.JPG)](/wiki/index.php?title=File:Liebers_P1016861.JPG)

[![Liebers
P1016869.JPG](/wiki/images/f/fd/Liebers_P1016869.JPG)](/wiki/index.php?title=File:Liebers_P1016869.JPG)

> [...] my focus in this chapter is on the inscription technology that grew
parasitically alongside the monopolistic pricing strategies of telegraph
companies: telegraph code books. Constructed under the bywords “economy,”
“secrecy,” and “simplicity,” telegraph code books matched phrases and words
with code letters or numbers. The idea was to use a single code word instead
of an entire phrase, thus saving money by serving as an information
compression technology. Generally economy won out over secrecy, but in
specialized cases, secrecy was also important.[2]

In Katherine Hayles' chapter devoted to telegraph code books she observes how:

> The interaction between code and language shows a steady movement away from
a human-centric view of code toward a machine-centric view, thus anticipating
the development of full-fledged machine codes with the digital computer. [3]

[![Liebers
P1016851.JPG](/wiki/images/1/13/Liebers_P1016851.JPG)](/wiki/index.php?title=File:Liebers_P1016851.JPG)
Aspects of this transitional moment are apparent in a notice prominently
inserted in the Lieber's code book:

> After July, 1904, all combinations of letters that do not exceed ten will
pass as one cipher word, provided that it is pronounceable, or that it is
taken from the following languages: English, French, German, Dutch, Spanish,
Portuguese or Latin -- International Telegraphic Conference, July 1903 [4]

Conforming to international conventions regulating telegraph communication at
that time, the stipulation that code words be actual words drawn from a
variety of European languages (many of Lieber's code words are indeed
arbitrary Dutch, German, and Spanish words) underscores this particular moment
of transition as reference to the human body in the form of "pronounceable"
speech from representative languages begins to yield to the inherent potential
for arbitrariness in digital representation.

What telegraph code books remind us of is the relation of language in general
to economy. Whether economies of memory, of attention, of costs paid to a
telecommunications company, or of computer processing time or storage space,
encoding language or knowledge in any form of writing is a form of shorthand,
and always involves an interplay with what one expects to perform or "get
out" of the resulting encoding.

> Along with the invention of telegraphic codes comes a paradox that John
Guillory has noted: code can be used both to clarify and occlude. Among the
sedimented structures in the technological unconscious is the dream of a
universal language. Uniting the world in networks of communication that
flashed faster than ever before, telegraphy was particularly suited to the
idea that intercultural communication could become almost effortless. In this
utopian vision, the effects of continuous reciprocal causality expand to
global proportions capable of radically transforming the conditions of human
life. That these dreams were never realized seems, in retrospect, inevitable.
[5]

[![Liebers
P1016884.JPG](/wiki/images/9/9c/Liebers_P1016884.JPG)](/wiki/index.php?title=File:Liebers_P1016884.JPG)

[![Liebers
P1016852.JPG](/wiki/images/7/74/Liebers_P1016852.JPG)](/wiki/index.php?title=File:Liebers_P1016852.JPG)

[![Liebers
P1016880.JPG](/wiki/images/1/11/Liebers_P1016880.JPG)](/wiki/index.php?title=File:Liebers_P1016880.JPG)

Far from providing a universal system of encoding messages in the English
language, Lieber's code is quite clearly designed for the particular needs and
conditions of its use. In addition to the phrases ordered by keywords, the
book includes a number of tables of terms for specialized use. One table lists
a set of words used to describe all possible permutations of numeric grades of
coffee (Choliam = 3,4, Choliambos = 3,4,5, Choliba = 4,5, etc.); another table
lists pairs of code words to express the respective daily rise or fall of the
price of coffee at the port of Le Havre in increments of a quarter of a Franc
per 50 kilos ("Chirriado = prices have advanced 1 1/4 francs"). From an
archaeological perspective, the Lieber's code book reveals a cross section of
the needs and desires of early 20th century business communication between the
United States and its trading partners.

The advertisements lining the Liebers Code book further situate its use and
that of commercial telegraphy. Among the many advertisements for banking and
law services, office equipment, and alcohol are several ads for gun powder and
explosives, drilling equipment and metallurgic services all with specific
applications to mining. Building on telegraphy's formative role in ship-to-
shore and ship-to-ship communication for reasons of safety, commercial
telegraphy extended this network of communication to include those parties
coordinating the "raw materials" being mined, grown, or otherwise extracted
from overseas sources and shipped back for sale.

## "Raw data now!"

From [La ville intelligente - Ville de la connaissance](/wiki/index.php?title
=La_ville_intelligente_-_Ville_de_la_connaissance "La ville intelligente -
Ville de la connaissance"):

Étant donné que les nouvelles formes modernistes et l'utilisation de matériaux
propageaient l'abondance d'éléments décoratifs, Paul Otlet croyait en la
possibilité du langage comme modèle de « [données
brutes](/wiki/index.php?title=Bag_of_words "Bag of words") », le réduisant aux
informations essentielles et aux faits sans ambiguïté, tout en se débarrassant
de tous les éléments inefficaces et subjectifs.


From [The Smart City - City of Knowledge](/wiki/index.php?title
=The_Smart_City_-_City_of_Knowledge "The Smart City - City of Knowledge"):

As new modernist forms and use of materials propagated the abundance of
decorative elements, Otlet believed in the possibility of language as a model
of '[raw data](/wiki/index.php?title=Bag_of_words "Bag of words")', reducing
it to essential information and unambiguous facts, while removing all
inefficient assets of ambiguity or subjectivity.


> Tim Berners-Lee: [...] Make a beautiful website, but first give us the
unadulterated data, we want the data. We want unadulterated data. OK, we have
to ask for raw data now. And I'm going to ask you to practice that, OK? Can
you say "raw"?

>

> Audience: Raw.

>

> Tim Berners-Lee: Can you say "data"?

>

> Audience: Data.

>

> TBL: Can you say "now"?

>

> Audience: Now!

>

> TBL: Alright, "raw data now"!

>

> [...]

>

> So, we're at the stage now where we have to do this -- the people who think
it's a great idea. And all the people -- and I think there's a lot of people
at TED who do things because -- even though there's not an immediate return on
the investment because it will only really pay off when everybody else has
done it -- they'll do it because they're the sort of person who just does
things which would be good if everybody else did them. OK, so it's called
linked data. I want you to make it. I want you to demand it. [6]

## Un/Structured

As graduate students at Stanford, Sergey Brin and Lawrence (Larry) Page had an
early interest in producing "structured data" from the "unstructured" web. [7]

> The World Wide Web provides a vast source of information of almost all
types, ranging from DNA databases to resumes to lists of favorite restaurants.
However, this information is often scattered among many web servers and hosts,
using many different formats. If these chunks of information could be
extracted from the World Wide Web and integrated into a structured form, they
would form an unprecedented source of information. It would include the
largest international directory of people, the largest and most diverse
databases of products, the greatest bibliography of academic works, and many
other useful resources. [...]

>

> **2.1 The Problem**
> Here we define our problem more formally:
> Let D be a large database of unstructured information such as the World
Wide Web [...] [8]

In a paper titled _Dynamic Data Mining_ , Brin and Page situate their research
as a search for _rules_ (statistical correlations) between words used in web
pages. The "baskets" they mention stem from the "market basket" techniques
originally developed to find correlations between the items recorded in the
purchase receipts of supermarket customers. In their case, they deal with web
pages rather than shopping baskets, and with words instead of purchases. In
transitioning to the much larger scale of the web, they describe the
usefulness of their research in terms of its computational economy: the
ability to tackle the scale of the web and still complete the task, with
contemporary computing power, in a reasonably short amount of time.

> A traditional algorithm could not compute the large itemsets in the lifetime
of the universe. [...] Yet many data sets are difficult to mine because they
have many frequently occurring items, complex relationships between the items,
and a large number of items per basket. In this paper we experiment with word
usage in documents on the World Wide Web (see Section 4.2 for details about
this data set). This data set is fundamentally different from a supermarket
data set. Each document has roughly 150 distinct words on average, as compared
to roughly 10 items for cash register transactions. We restrict ourselves to a
subset of about 24 million documents from the web. This set of documents
contains over 14 million distinct words, with tens of thousands of them
occurring above a reasonable support threshold. Very many sets of these words
are highly correlated and occur often. [9]
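The "market basket" logic can be sketched as counting co-occurrence support: how many baskets (here, documents) contain a given set of items (here, words). The toy documents below are invented; sampling approaches such as that of _Dynamic Data Mining_ exist precisely because enumerating candidate sets this naively is hopeless at web scale.

```python
from collections import Counter
from itertools import combinations

# Each "basket" is the set of distinct words in one toy document.
documents = [
    {"coffee", "price", "havre"},
    {"coffee", "price", "cotton"},
    {"cotton", "bales", "price"},
    {"coffee", "havre"},
]

# Support of a word pair = number of documents containing both words.
support = Counter()
for doc in documents:
    for pair in combinations(sorted(doc), 2):
        support[pair] += 1

# Pairs at or above a support threshold become candidate "rules".
threshold = 2
frequent = {pair: n for pair, n in support.items() if n >= threshold}
print(frequent)
# {('coffee', 'havre'): 2, ('coffee', 'price'): 2, ('cotton', 'price'): 2}
```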

## Un/Ordered

In programming, I've encountered a recurring "problem" that's quite
symptomatic. It goes something like this: you (the programmer) have managed to
cobble together a lovely "content management system" (either from scratch, or
using any number of helpful frameworks) where your user can enter some "items"
into a database, for instance to store bookmarks. The entered items are then
automatically presented in list form (say on a web page). The author: It's
great, except... could this bookmark come before that one? The problem stems
from the fact that the database ordering (a core functionality provided by any
database) somehow applies a sorting logic that's almost but not quite right. A
typical example is the sorting of names, where details (where to place a name
that starts with a Norwegian "Ø", for instance) are language-specific, and
when a mixture of languages occurs, no single ordering is necessarily
"correct". The (often) exasperated programmer might hastily add an additional
database field so that each item can also have an "order" (perhaps in the form
of a date or some other kind of (alpha)numerical "sorting" value) to be used
to correctly order the resulting list.

Now the author has a means, awkward and indirect but workable, to control the
order of the presented data on the start page. But one might well ask: why not
just edit the resulting listing as a document? Not possible! Contemporary
content management systems are based on a data flow from the "pure" source of
a database, through controlling code and templates, to produce a document as a
result. The document isn't the data; it's the end result of an irreversible
process. This problem, in this and many variants, is widespread and reveals an
essential backwardness in a particular "computer scientist" mindset: a
conception of what constitutes "data", and in particular of its relationship
to order, that turns what might be a straightforward question of editing a
document into an over-engineered database.
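A minimal illustration of the sorting problem (the names are invented): Python's default sort compares Unicode code points, so "Ø" (U+00D8) lands after every unaccented Latin letter, whatever a given language's collation rules might prefer:

```python
names = ["Østby", "Andersen", "Zimmer", "Berg"]

# Code-point order places "Østby" after "Zimmer".
print(sorted(names))  # ['Andersen', 'Berg', 'Zimmer', 'Østby']

# Language-aware ordering requires locale-specific collation (for
# instance sorting with locale.strxfrm as the key under an appropriate
# LC_COLLATE setting), and with a mixture of languages in one list
# no single collation is "correct".
```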

Recently I worked with Nikolaos Vogiatzis, whose research explores playful and
radically subjective alternatives to the list. Vogiatzis was struck by how the
earliest specifications of HTML (still valid today) have separate elements (OL
and UL) for "ordered" and "unordered" lists.

> The representation of the list is not defined here, but a bulleted list for
unordered lists, and a sequence of numbered paragraphs for an ordered list
would be quite appropriate. Other possibilities for interactive display
include embedded scrollable browse panels. [10]

Vogiatzis' surprise lay in the idea of a list ever being considered
"unordered" (or, in the language used in the specification, of order ever
being considered "insignificant"). Indeed in its suggested
representation, still followed by modern web browsers, the only difference
between the two visually is that UL items are preceded by a bullet symbol,
while OL items are numbered.

The idea of ordering runs deep in programming practice, where essentially
different data structures are employed depending on whether order is to be
maintained. The indexes of a "hash" table (also known as an associative
array), for instance, are ordered in an unpredictable way governed by the
particular implementation. This data structure, extremely prevalent in
contemporary programming practice, sacrifices order to offer other kinds of
efficiency (fast text-based retrieval, for instance).
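A small sketch of the trade-off in Python (with one caveat: since version 3.7 Python's built-in dict does guarantee insertion order, a deliberate departure from the classic hash table; the built-in set still makes no ordering promise):

```python
# A hash table buys fast membership tests and lookups:
index = {"cotton": [1, 4], "coffee": [2], "bales": [1, 3]}
found = "coffee" in index    # O(1) on average, whatever the size
postings = index["bales"]    # fast retrieval by (text) key

# ...but a classic hash table's iteration order follows its internal
# layout, not insertion or alphabetical order. Python's set behaves
# this way: the order printed below depends on the interpreter's hash
# seed and must not be relied upon.
words = {"cotton", "coffee", "bales"}
print(list(words))  # order is an implementation detail
```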

## Data mining

In announcing Google's impending data center in Mons, Belgian prime minister
Di Rupo invoked the link between the history of the mining industry in the
region and the present and future interest in "data mining" as practiced by IT
companies such as Google.

Whether speaking of bales of cotton, barrels of oil, or bags of words, what
links these subjects is the way in which the notion of "raw material" obscures
the labor and power structures employed to secure them. "Raw" is always
relative: "purity" depends on processes of "refinement" that typically carry
social/ecological impact.

Stripping language of order is an act of "disembodiment", detaching it from
the acts of writing and reading. The shift from (human) reading to machine
reading involves a shift of responsibility from the individual human body to
the obscured responsibilities and seemingly inevitable forces of the
"machine", be it the machine of a market or the machine of an algorithm.

From [X = Y](/wiki/index.php?title=X_%3D_Y "X = Y"):

Still, it is reassuring to know that the products hold traces of the work,
that even with the progressive removal of human signs in automated processes,
the workers' presence never disappears completely. This presence is proof of
the materiality of information production, and becomes a sign of the economies
and paradigms of efficiency and profitability that are involved.


The computer scientists' view of textual content as "unstructured", be it in a
webpage or the OCR scanned pages of a book, reflects a disregard for the
processes and labor of writing, editing, design, layout, typesetting, and
eventually publishing, collecting and cataloging [11].

"Unstructured" to the computer scientist, means non-conformant to particular
forms of machine reading. "Structuring" then is a social process by which
particular (additional) conventions are agreed upon and employed. Computer
scientists often view text through the eyes of their particular reading
algorithm, and in the process (voluntarily) blind themselves to the work
practices which have produced and maintain these "resources".

Berners-Lee, in chastising his audience of web publishers not only to publish
online but to release "unadulterated" data, betrays a lack of imagination in
considering how language is itself structured, and a blindness to the need for
more than additional technical standards to connect with existing publishing
practices.

Last Revision: 2.08.2016

1. ↑ Benjamin Franklin Lieber, Lieber's Standard Telegraphic Code, 1896, New York.
2. ↑ Katherine Hayles, "Technogenesis in Action: Telegraph Code Books and the Place of the Human", How We Think: Digital Media and Contemporary Technogenesis, 2012.
3. ↑ Hayles.
4. ↑ Lieber's.
5. ↑ Hayles.
6. ↑ Tim Berners-Lee, The next web, TED Talk, February 2009.
7. ↑ "Research on the Web seems to be fashionable these days and I guess I'm no exception." From Brin's [Stanford webpage](http://infolab.stanford.edu/~sergey/).
8. ↑ Sergey Brin, Extracting Patterns and Relations from the World Wide Web, Proceedings of the WebDB Workshop at EDBT, 1998.
9. ↑ Sergey Brin and Lawrence Page, Dynamic Data Mining: Exploring Large Rule Spaces by Sampling, 1998, p. 2.
10. ↑ Tim Berners-Lee and Daniel Connolly, Hypertext Markup Language (HTML): "Internet Draft", June 1993.
11. ↑

Retrieved from
[https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480](https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480)

Mattern
Making Knowledge Available
2018


# Making Knowledge Available

## The media of generous scholarship

[Shannon Mattern](http://www.publicseminar.org/author/smattern/ "Posts by
Shannon Mattern") -- [March 22, 2018](http://www.publicseminar.org/2018/03
/making-knowledge-available/ "Permalink to Making Knowledge Available")


[ ![](http://www.publicseminar.org/wp-content/uploads/2018/03
/6749000895_ea0145ed2d_o-750x375.jpg) ](http://www.publicseminar.org/wp-
content/uploads/2018/03/6749000895_ea0145ed2d_o.jpg "Making Knowledge
Available")

Visible Knowledge © Jasinthan Yoganathan | Flickr

A few weeks ago, shortly after reading that Elsevier, the world’s largest
academic publisher, had made over €1 billion in profit in 2017, I received
notice of a new journal issue on decolonization and media.* “Decolonization”
denotes the dismantling of imperialism, the overturning of systems of
domination, and the founding of new political orders. Recalling Achille
Mbembe’s exhortation that we seek to decolonize our knowledge production
practices and institutions, I looked forward to exploring this new collection
of liberated learning online – amidst that borderless ethereal terrain where
information just wants to be free. (…Not really.)

Instead, I encountered a gate whose keeper sought to extract a hefty toll: $42
to rent a single article for the day, or $153 to borrow it for the month. The
keeper of that particular gate, mega-publisher Taylor & Francis, like the
keepers of many other epistemic gates, has found toll-collecting to be quite a
profitable business. Some of the largest academic publishers have, in recent
years, achieved profit margins of nearly 40%, higher than those of Apple and
Google. Granted, I had access to an academic library and an InterLibrary Loan
network that would help me to circumvent the barriers – yet I was also aware
of just how much those libraries were paying for that access on my behalf; and
of all the un-affiliated readers, equally interested and invested in
decolonization, who had no academic librarians to serve as their liaisons.

I’ve found myself standing before similar gates in similar provinces of
paradox: the scholarly book on “open data” that sells for well over $100; the
conference on democratizing the “smart city,” where tickets sell for ten times
as much. Librarian Ruth Tillman was [struck with “acute irony
poisoning”](https://twitter.com/ruthbrarian/status/932701152839454720) when
she encountered a costly article on rent-seeking and value-grabbing in a
journal of capitalism and socialism, which was itself rentable by the month
for a little over $900.

We’re certainly not the first to acknowledge the paradox. For decades, many
have been advocating for open-access publishing, authors have been campaigning
for less restrictive publishing agreements, and librarians have been
negotiating with publishers over exorbitant subscription fees. That fight
continues: in mid-February, over 100 libraries in the UK and Ireland
[submitted a letter](https://www.sconul.ac.uk/page/open-letter-to-the-
management-of-the-publisher-taylor-francis) to Taylor & Francis protesting
their plan to lock up content more than 20 years old and sell it as a separate
package.

My coterminous discoveries of Elsevier’s profit and that decolonization-
behind-a-paywall once again highlighted the ideological ironies of academic
publishing, prompting me to [tweet
something](https://twitter.com/shannonmattern/status/969418644240420865) half-
baked about academics perhaps giving a bit more thought to whether the
politics of their publishing  _venues_  – their media of dissemination –
matched the politics they’re arguing for in their research. Maybe, I proposed,
we aren’t serving either ourselves or our readers very well by advocating for
social justice or “the commons” – or sharing progressive research on labor
politics and care work and the elitism of academic conventions – in journals
that extract huge profits from free labor and exploitative contracts and fees.

Despite my attempt to drown my “call to action” in a swamp of rhetorical
conditionals – “maybe” I was “kind-of” hedging “just a bit”? – several folks
quickly, and constructively, pointed out some missing nuances in my tweet.
[Librarian and LIS scholar Emily Drabinski
noted](https://twitter.com/edrabinski/status/969629307147563008) the dangers
of suggesting that individual “bad actors” are to blame for the hypocrisies
and injustices of a broken system – a system that includes authors, yes, but
also publishers of various ideological orientations, libraries, university
administrations, faculty review committees, hiring committees, accreditors,
and so forth.

And those authors are not a uniform group. Several junior scholars replied to
say that they think  _a lot_  about the power dynamics of academic publishing
(many were “hazed,” at an early age, into the [Impact
Factor](https://en.wikipedia.org/wiki/Impact_factor) Olympics, encouraged to
obsessively count citations and measure “prestige”). They expressed a desire
to experiment with new modes and media of dissemination, but lamented that
they had to bracket their ethical concerns and aesthetic aspirations. Because
tenure. Open-access publications, and more-creative-but-less-prestigious
venues, “don’t count.” Senior scholars chimed in, too, to acknowledge that
scholars often publish in different venues at different times for different
purposes to reach different audiences (I’d add, as well, that some
conversations need to happen in enclosed, if not paywalled, environments
because “openness” can cultivate dangerous vulnerabilities). Some also
concluded that, if we want to make “open access” and public scholarship – like
that featured in  _Public Seminar_  – “count,” we’re in for a long battle: one
that’s best waged within big professional scholarly associations. Even then,
there’s so much entrenched convention – so many naturalized metrics and
administrative structures and cultural habits – that we’re kind-of stuck with
these rentier publishers (to elevate the ingrained irony: in August 2017,
Elsevier acquired bepress, an open-access digital repository used by many
academic institutions). They need our content and labor, which we willingly give
away for free, because we need their validation even more.

All this is true. Still, I’d prefer to think that we  _can_ actually resist
rentierism, reform our intellectual infrastructures, and maybe even make some
progress in “decolonizing” the institution over the next years and decades. As
a mid-career scholar, I’d like to believe that my peers and I, in
collaboration with our junior colleagues and colleagues-to-be, can espouse new
values – which include attention to the political, ethical, and even aesthetic
dimensions of the means and  _media_ through which we do our scholarship – in
our search committees, faculty reviews, and juries. Change  _can_  happen at
the local level; one progressive committee can set an example for another, and
one college can do the same. Change can take root at the mega-institutional
scale, too. Several professional organizations, like the Modern Language
Association and many scientific associations, have developed policies and
practices to validate open-access publishing. We can look, for example, to the
[MLA Commons](https://mla.hcommons.org/) and the [Manifold publishing
platform](https://manifold.umn.edu/). We can also look to Germany, where a
nationwide consortium of libraries, universities, and research institutes has
been battling Elsevier since 2016 over their subscription and access policies.
Librarians have long been advocates for ethical publishing, and [as Drabinski
explains](https://crln.acrl.org/index.php/crlnews/article/view/9568/10924),
they’re equipped to consult with scholars and scholarly organizations about
the publication media and platforms that best reinforce their core values.
Those values are the chief concern of the [HuMetricsHSS
initiative](http://humetricshss.org/about-2/), which is imagining a “more
humane,” values-based framework for evaluating scholarly work.

We also need to acknowledge the work of those who’ve been advocating for
similar ideals – and working toward a more ethically reflective publishing
culture – for years. Let’s consider some examples from the humanities and
social sciences – like the path-breaking [Institute for the Future of the
Book](http://www.futureofthebook.org/), which provided the platform where my
colleague McKenzie Wark publicly edited his [ _Gamer
Theory_](http://futureofthebook.org/gamertheory2.0/) back in 2006. Wark’s book
began online and became a print book, published by Harvard. Several
institutions – MIT; [Minnesota](https://www.upress.umn.edu/book-
division/series/forerunners-ideas-first); [Columbia’s Graduate School of
Architecture, Planning, and Preservation
](https://www.arch.columbia.edu/books)(whose publishing unit is led by a New
School alum, James Graham, who also happens to be a former thesis advisee);
Harvard’s [Graduate School of Design
](http://www.gsd.harvard.edu/publications/)and
[metaLab](http://www.hup.harvard.edu/collection.php?cpk=2006); and The New
School’s own [Vera List Center
](http://www.veralistcenter.org/engage/publications/1993/entry-pointsthe-vera-
list-center-field-guide-on-art-and-social-justice-no-1/)– have been
experimenting with the printed book. And individual scholars and
practitioners, like Nick Sousanis, who [published his
dissertation](http://www.hup.harvard.edu/catalog.php?isbn=9780674744431) as a
graphic novel, regard the bibliographic form as integral to their arguments.

Kathleen Fitzpatrick has also been a vibrant force for change, through her
work with the [MediaCommons](http://mediacommons.futureofthebook.org/) digital
scholarly network, her two
[open-review](http://www.plannedobsolescence.net/peer-to-peer-review-and-its-aporias/)
books, and [her
advocacy](http://www.plannedobsolescence.net/evolving-standards-and-practices-in-tenure-and-promotion-reviews/)
for more flexible, more thoughtful faculty review standards. Her new
manuscript, _Generous Thinking_, which lives up to its name, proposes [public
intellectualism](https://generousthinking.hcommons.org/4-working-in-public/public-intellectuals/)
as one such generous practice and advocates for [its positive
valuation](https://generousthinking.hcommons.org/5-the-university/) within the
academy. “What would be required,” she asks, “for the university to begin
letting go of the notion of prestige and of the competition that creates it in
order to begin aligning its personnel processes with its deepest values?” Such
a realignment, I want to emphasize, need not mean a reduction in rigor, as
some have worried; we can still have standards, while insisting that they
correspond to our values. USC’s Tara McPherson has modeled generous and
careful scholarship through her own work and her collaborations in developing
the [Vectors](http://vectors.usc.edu/issues/index.php?issue=7) and
[Scalar](https://scalar.me/anvc/scalar/) publishing platforms, which launched
in 2005 and 2013, respectively.  _Public Seminar_  is [part of that long
tradition](http://www.publicseminar.org/2017/09/the-life-of-the-mind-online/),
too.

Individual scholars – particularly those who enjoy some measure of security –
can model a different pathway and advocate for a more sane, sustainable, and
inclusive publication and review system. Rather than blaming the “bad actors”
for making bad choices and perpetuating a flawed system, let’s instead
incentivize the good ones to practice generosity.

In that spirit, I’d like to close by offering a passage I included in my own
promotion dossier, where I justified my choice to prioritize public
scholarship over traditional peer-reviewed venues. I aimed here to make my
values explicit. While I won’t know the outcome of my review for a few months,
and thus I can’t say whether or not this passage successfully served its
rhetorical purpose, I do hope I’ve convincingly argued here that, in
researching media and technology, one should also think critically about the
media one chooses to make that research public. I share this in the hope that
it’ll be useful to others preparing for their own job searches and faculty
reviews, or negotiating their own politics of practice. The passage is below.

* * *

…[A] concern with public knowledge infrastructures has… informed my choice of
venues for publication. Particularly since receiving tenure I’ve become much
more attuned to publication platforms themselves as knowledge infrastructures.
I’ve actively sought out venues whose operational values match the values I
espouse in my research – openness and accessibility (and, equally important,
good design!) – as well as those that The New School embraces through its
commitment to public scholarship and civic engagement. Thus, I’ve steered away
from those peer-reviewed publications that are secured behind paywalls and
rely on uncompensated editorial labor while their parent companies uphold
exploitative copyright policies and charge exorbitant subscription fees. I’ve
focused instead on open-access venues. Most of my articles are freely
available online, and even my 2015 book,  _Deep Mapping the Media City_ ,
published by the University of Minnesota Press, has been made available
through the Mellon Foundation-funded Manifold open-access publishing platform.
In those cases in which I have been asked to contribute work to a restricted
peer-reviewed journal or costly edited volume, I’ve often negotiated with the
publisher to allow me to “pre-print” my work as an article in an open-access
online venue, or to preview an unedited copy.

I’ve been invited to address the ethics and epistemologies of scholarly
publishing and pedagogical platforms in a variety of venues, A, B, C, D, and
E. I also often chat with graduate students and junior scholars about their
own “publication politics” and appropriate venues for their work, and I review
their prospectuses and manuscripts.

The most personally rewarding and professionally valuable publishing
experience of my post-tenure career has been my collaboration with  _Places
Journal_ , a highly regarded non-profit, university-supported, open-access
venue for public scholarship on landscape, architecture, and urbanism. After
having written thirteen (fifteen by Fall 2017) long-form pieces for  _Places_
since 2012, I’ve effectively assumed their “urban data and mediated spaces”
beat. I work with paid, professional editors who care not only about subject
matter – they’re just as much domain experts as any academic peer reviewer
I’ve encountered – but also about clarity and style and visual presentation.
My research and writing process for  _Places_ is no less time- and labor-
intensive, and the editorial process is no less rigorous, than would be
required for a traditional academic publication, but  _Places_  allows my work
to reach a global, interdisciplinary audience in a timely manner, via a
smartly designed platform that allows for rich illustration. This public
scholarship has a different “impact” than pay-walled publications in prestige
journals. Yet the response to my work on social media, the number of citations
it’s received (in both scholarly and popular literature), and the number of
invitations it’s generated, suggest the significant, if incalculable, value of
such alternative infrastructures for academic publishing. By making my work
open and accessible, I’ve still managed to meet many of the prestige- and
scarcity-driven markers of academic excellence (for more on my work’s impact,
see Appendix A).

_* I’ve altered some details so as to avoid sanctioning particular editors or
authors._

_Shannon Mattern is Associate Professor of Media Studies at The New School and
author of numerous books with University of Minnesota Press. Find her on
twitter[@shannonmattern](http://www.twitter.com/shannonmattern)._


 
