29 August 2013

Signposts for the Future of Computal Media


I would like to begin to outline what I think are some of the important trajectories to keep an eye on in regard to what I increasingly think of as computal media. That is, the broad area dependent on computational processing technologies, or areas soon to be colonised by such technologies.

In order to do this I want to examine a number of key moments that can be used to structure thinking about the softwarization of media. By “softwarization”, I mean broadly the notion of Andreessen (2011) that “software is eating the world” (see also Berry 2011; Manovich 2013). Softwarization is then a process of the application of computation (see Schlueter Langdon 2003), in this case to all forms of historical media, but also in the generation of born-digital media. 

However, this process of softwarization is tentative, multi-directional, contested, and moving on multiple strata at different modularities and speeds. We therefore need to develop critiques of the concepts that drive these processes of softwarization, but also to think about what kinds of experience make the epistemological categories of the computal possible. For example, one feature that distinguishes the computal is its division into surfaces, rough or pleasant, and concealed inaccessible structures. 

It seems to me that this task is rightly one that is a critical undertaking. That is, as an historical materialism that understands that the key organising principles of our experience are produced by ideas developed within the array of social forces that human beings have themselves created. This includes understanding the computal subject as an agent dynamically contributing and responding to the world. 

So I want to now look at a number of moments to draw out some of what I think are the key developments to be attentive to in computal media. That is, not the future of new media as such, but rather “possibilities” within computal media, sometimes latent but also apparent. 

The Industrial Internet

A new paradigm called the “industrial internet” is emerging, a computational, real-time streaming ecology that is reconfigured in terms of digital flows, fluidities and movement. In the new industrial internet the paradigmatic metaphor I want to use is real-time streaming technologies and the data flows, processual stream-based engines and the computal interfaces and computal “glue” holding them together. This is the internet of things and the softwarization of everyday life and represents the beginning of a post-digital experience of computation as such.

This calls for us to stop thinking about the digital as something static, discrete and object-like and instead consider 'trajectories' and computational logistics. In hindsight, for example, it is possible to see that new media such as CDs and DVDs were only ever the first step on the road to a truly computational media world. Capturing bits and disconnecting them from wider networks, placing them on plastic discs and stacking them in shops for us to go visit and buy seems bizarrely pedestrian today. 

Taking account of such media and related cultural practices becomes increasingly algorithmic, and as such media itself becomes mediated via software. At the same time previous media forms are increasingly digitalised and placed in databases, viewed not on the original equipment but accessed through software devices, browsers and apps. As all media becomes algorithmic, it is subject to monitoring and control at a level to which we are not accustomed – e.g. Amazon’s mass deletion of Orwell’s 1984 from personal Kindles in 2009 (Stone 2009).

The imminent rolling out of the sensor-based world of the internet of things is underway, with companies such as Broadcom developing Wireless Internet Connectivity for Embedded Devices: "WICED Direct will allow OEMs to develop wearable sensors -- pedometers, heart-rate monitors, keycards -- and clothing that transmit everyday data to the cloud via a connected smartphone or tablet" (Seppala 2013). Additionally, Apple is developing new technology in this area with its iBeacon software layer, which uses Bluetooth Low Energy (BLE) to create location-aware micro-devices and "can enable a mobile user to navigate and interact with specific regions geofenced by low cost signal emitters that can be placed anywhere, including indoors, and even on moving targets" (Dilger 2013). In fact, the "dual nature of the iBeacons is really interesting as well. We can receive content from the beacons, but we can be them as well" (Kosner 2013). This relies on Bluetooth version 4.0, also called "Bluetooth Smart", which supports devices that can be powered for many months, and in some cases years, by a small button battery. Indeed,
BLE is especially useful in places (like inside a shopping mall) where GPS location data may not be reliably available. The sensitivity is also greater than either GPS or WiFi triangulation. BLE allows for interactions as far away as 160 feet, but doesn’t require surface contact (Kosner 2013).
These new computational sensors enable Local Positioning Systems (LPS) or micro-location, in contrast to the less precise technology of Global Positioning Systems (GPS). These "location based applications can enable personal navigation and the tracking or positioning of assets" to the centimetre, rather than the metre, and hence have great potential as tracking systems inside buildings and facilities (Feldman 2009).
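As a rough illustration of how such micro-location works, beacon receivers typically estimate range from received signal strength. The sketch below uses the standard log-distance path loss model; the calibrated transmit power and path loss exponent are illustrative assumptions, not values from any particular beacon specification.

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from a BLE beacon reading using
    the log-distance path loss model. `tx_power` is the calibrated
    RSSI at one metre; both defaults are illustrative assumptions."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# A beacon heard at its calibrated one-metre power reads as ~1 m away;
# a 20 dB weaker signal implies roughly ten times the distance.
print(estimate_distance(-59))   # 1.0
print(estimate_distance(-79))   # 10.0
```

Real deployments fold many more corrections into this estimate; the point is only that proximity, not just presence, can be computed from a beacon's signal.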

Bring Your Own Device (BYOD)

This shift also includes the move from relatively static desktop computers to mobile computers and tablet-based devices – the consumerisation of technology. Indeed, according to the International Telecommunications Union (ITU 2012: 1), in 2012 there were 6 billion mobile devices (up from 2.7 billion in 2006), with YouTube alone streaming 200 terabytes of video media per day. Indeed, by the end of 2011, 2.3 billion people (i.e. one in three) were using the Internet (ITU 2012: 3).

Users created 1.8 zettabytes of data in 2011, and this is expected to grow to 7.9 zettabytes by 2015 (Kalakota 2011). To put this in perspective, a zettabyte is equal to 1 billion terabytes – clearly at these scales storage sizes become increasingly difficult for humans to comprehend. A zettabyte is roughly equal in size to twenty-five billion Blu-ray discs or 250 billion DVDs.
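For a rough check of these comparisons: the disc counts depend on the capacities assumed, so the sketch below uses decimal units, a 4.7 GB single-layer DVD and a 50 GB dual-layer Blu-ray as illustrative assumptions, which gives figures in the same order of magnitude as those quoted above.

```python
# Sanity-checking the storage comparisons (decimal units throughout).
ZETTABYTE = 10**21           # bytes
TERABYTE = 10**12
DVD = 4.7 * 10**9            # single-layer DVD capacity (assumption)
BLU_RAY = 50 * 10**9         # dual-layer Blu-ray capacity (assumption)

print(ZETTABYTE // TERABYTE)            # 1000000000 -- i.e. 1 billion TB
print(round(ZETTABYTE / DVD / 10**9))   # 213 -- ~213 billion DVDs
print(ZETTABYTE // BLU_RAY)             # 20000000000 -- 20 billion discs
```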

The acceptance by users and providers of the consumerisation of technology has also opened up the space for the development of "wearables" and these highly intimate devices are under current development, with the most prominent example being Google Glass. Often low-power devices, making use of the BLE and iBeacon type technologies, they augment our existing devices, such as the mobile phone, rather than outright replacing them, but offer new functionalities, such as fitness monitors, notification interfaces, contextual systems and so forth. 

The Personal Cloud (PC)

These pressures are creating an explosion in data and a corresponding expansion in various forms of digital media (currently uploaded to corporate clouds). As a counter-move to the existence of massive centralised corporate systems there is a call for Personal Clouds (PCs), a decentralisation of data from the big cloud providers (Facebook, Google, etc.) into smaller personal spaces (see Personal Cloud 2013). Conceptually this is interesting in relation to BYOD. 

This of course changes our relationship to knowledge, and the forms of knowledge which we keep and are able to use. Archives are increasingly viewed through the lens of computation, both in terms of cataloging and storage but also in terms of remediation and configuration. Practices around these knowledges are also shifting, and as social media demonstrates, new forms of sharing and interaction are made possible. Personal Cloud also has links to decentralised authentication technologies (e.g. DAuth vs OAuth).

Digital Media, Social Reading, Sprints

It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. There are lots of experiments in this space, e.g. my notion of the “minigraph” (Berry 2013) or the mini-monograph, technical reports, the “multigraph” (McCormick 2013), pamphlets, and so forth. There are also new means for writing (e.g. Quip) and for social reading and collaborative writing (e.g. Book Sprints).

DIY Encryption and Cypherpunks

Together, these technologies create the contours of a new communicational landscape appearing before us, and into which computational media mediates use and interaction. Phones become smart phones and media devices that can identify, monitor and control our actions and behaviour through anticipatory computing. Whilst seemingly freeing us, we are also increasingly enclosed within an algorithmic cage that attempts to surround us with contextual advertising and behavioural nudges.

One response could be “Critical Encryption Practices”, the dual moment of a form of computal literacy and understanding of encryption technologies and cryptography combined with critical reflexive approaches. Cypherpunk approaches tend towards an individualistic libertarianism, but there remains a critical reflexive space opened up by their practices. Commentators are often dismissive of encryption as a “mere” technical solution to what is also a political problem of widespread surveillance. 

CV Dazzle Make-up, Adam Harvey
However, critical encryption practices could provide the political, technical and educative moments required for the kinds of media literacies important today – e.g. in civil society. 

This includes critical treatment of and reflection on crypto-systems such as cryptocurrencies like Bitcoin, and the kinds of cybernetic imaginaries that often accompany them. Critical encryption practices could also develop signalling systems – e.g. the new aesthetic and Adam Harvey’s work. 

Augmediated Reality

The idea of supplementing or augmenting reality is being transformed with the notion of “augmediated” technologies (Mann 2001). These are technologies that offer a radical mediation of everyday life via screenic forms (such as “Glass”) to co-construct a computally generated synoptic meta-reality formed of video feeds, augmented technology and real-time streams and notifications. Intel’s work on Perceptual Computing is a useful example of this kind of media form. 

The New Aesthetic

These factors raise issues of new aesthetic forms related to the computal. For example, augmediated aesthetics suggests new forms of experience in relation to its aesthetic mediation (Berry et al 2012). The continuing “glitch” digital aesthetic remains interesting in relation to the new aesthetic and aesthetic practice more generally (see Briz 2013). Indeed, the aesthetics of encryption, e.g. “complex monochromatic encryption patterns,” the mediation of encryption, etc. offers new ways of thinking about the aesthetic in relation to digital media more generally and the post-digital (see Berry et al 2013).

Bumblehive and Veillance

Within a security setting one of the key aspects is data collection and it comes as no surprise that the US has been at the forefront of rolling out gigantic data archive systems, with the NSA (National Security Agency) building the country’s biggest spy centre at its Utah Data Center (Bamford 2012) – codenamed Bumblehive. This centre has a “capacity that will soon have to be measured in yottabytes, which is 1 trillion terabytes or a quadrillion gigabytes” (Poitras et al 2013). 

This is connected to the notion of the comprehensive collection of data because, “if you're looking for a needle in the haystack, you need a haystack,” according to Jeremy Bash, the former CIA chief of staff. The scale of the data collection is staggering: according to Davies (2013), the UK GCHQ has placed “more than 200 probes on transatlantic cables and is processing 600m ‘telephone events’ a day as well as up to 39m gigabytes of internet traffic”. Veillance – both surveillance and sousveillance – is made easier with mobile devices and cloud computing, and we face rising challenges in responding to these issues. 
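To get a sense of the scale Davies reports, the daily totals can be converted to rough per-second rates. This is a back-of-the-envelope sketch, assuming an even spread over 24 hours and decimal units:

```python
# Per-second rates implied by the Davies (2013) daily figures,
# assuming the totals are spread evenly over the day.
SECONDS_PER_DAY = 24 * 60 * 60

telephone_events = 600_000_000 / SECONDS_PER_DAY        # "600m a day"
traffic_bytes = 39_000_000 * 10**9 / SECONDS_PER_DAY    # "39m gigabytes"

print(round(telephone_events))        # 6944 -- telephone events per second
print(round(traffic_bytes / 10**9))   # 451  -- gigabytes per second
```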

The Internet vs The Stacks

The internet as we tend to think of it has become increasingly colonised by massive corporate technology stacks. These companies, Google, Apple, Facebook, Amazon, Microsoft, are called collectively “The Stacks” (Sterling, quoted in Emami 2012) – vertically integrated giant social media corporations. As Sterling observes,

[There's] a new phenomena that I like to call the Stacks [vertically integrated social media]. And we've got five of them -- Google, Facebook, Amazon, Apple and Microsoft. The future of the stacks is basically to take over the internet and render it irrelevant. They're not hostile to the internet -- they're just [looking after] their own situation. And they all think they'll be the one Stack... and render the others irrelevant... They're annihilating other media... The Lords of the Stacks (Sterling, quoted in Emami 2012).
The Stacks also raise the issue of resistance and what we might call counter-stacks: hacking the stacks. Movements like the Indieweb and Personal Cloud computing are interesting responses to them, and Sterling optimistically thinks, "they'll all be rendered irrelevant. That's the future of the Stacks" (Sterling, quoted in Emami 2012). 

The Indieweb

The Indieweb is a kind of DIY response to the Stacks and an attempt to wrestle some control back from these corporate giants (Finley 2013). These Indieweb developers offer an interesting perspective on what is at stake in the current digital landscape; somewhat idealistic and technically oriented, they nonetheless offer a site of critique. They are also notable for “building things”, often small-scale, micro-format type things, decentralised and open source/free software in orientation. The Indieweb is, then, "an effort to create a web that’s not so dependent on tech giants like Facebook, Twitter, and, yes, Google — a web that belongs not to one individual or one company, but to everyone" (Finley 2013).

Push Notification

This surface, or interactional layer, of the digital is hugely important for providing the foundations through which we interact with digital media (Berry 2011). Under development are new high-speed adaptive algorithmic interfaces (algorithmic GUIs) that can offer contextual information, and even reshape the entire interface itself, through the monitoring of our reactions to computational interfaces and feedback and sensor information from the computational device itself – e.g. Google Now. 

The Notification Layer

One of the key sites for reconciliation of the complexity of real-time streaming computing is the notification layer, which will increasingly be an application programming interface (API) and function much like a platform. This is very much the battle taking place between the “Stacks”, e.g. Google Now, Siri, Facebook Home, Microsoft “tiles”, etc. With the political economy of advertising being transformed by the move from web to mobile, notification layers threaten revenue streams. 
It is also a battle over subjectivity and the kind of subject constructed in these notification systems.

Real-time Data vs Big Data

We have been hearing a lot about “big data” and related data visualisation, methods, and so forth. Big data (exemplified by the NSA Prism programme) is largely a historical batch computing system. A much more difficult challenge is real-time stream processing, e.g. future NSA programmes called SHELLTRUMPET, MOONLIGHTPATH and SPINNERET, and the GCHQ Tempora programme. 
That is, monitoring in real-time, and being able to computationally spot patterns, undertake stream processing, etc.
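A minimal sketch of the difference: rather than querying a stored batch, a stream processor inspects each event as it arrives, holding only a small window of state. The example below is a toy illustration of this pattern, not any actual NSA or GCHQ system; the spike-detection rule and its thresholds are arbitrary assumptions.

```python
from collections import deque

def detect_spikes(stream, window=5, threshold=2.0):
    """Flag values exceeding `threshold` times the mean of the
    preceding window -- a toy stand-in for spotting patterns in an
    unbounded stream, processed item by item rather than in batch."""
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            if mean > 0 and value > threshold * mean:
                yield value
        recent.append(value)

# Events per second on a simulated feed; the burst is flagged as it
# arrives, without the whole history ever being stored.
feed = [10, 11, 9, 10, 10, 50, 10, 9]
print(list(detect_spikes(feed)))   # [50]
```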

Contextual Computing

With multiple sensors built into new mobile devices (e.g. cameras, microphones, GPS, compass, gyroscopes, radios, etc.) new forms of real-time processing and aggregation become possible. In some senses then this algorithmic process is the real-time construction of a person's possible “futures” or their “futurity” – the idea, even, that eventually the curation systems will know “you” better than you know yourself – interesting for notions of ethics/ethos. This is the computational real-time imaginary envisaged by corporations, like Google, that want to tell you what you should be doing next...

Anticipatory Computing

Our phones are now smart phones, and as such become media devices that can also be used to identify, monitor and control our actions and behaviour through anticipatory computing. Elements of subjectivity, judgment and cognitive capacities are increasingly delegated to algorithms and prescribed to us through our devices, and there is clearly the danger of a lack of critical reflexivity, or even critical thought, in this new subject. This new paradigm of anticipatory computing stresses the importance of connecting up multiple technologies to enable a new kind of intelligence within these technical devices. 

Towards a Critical Response to the Post-Digital

Computation in a post-digital age is fundamentally changing the way in which knowledge is created, used, shared and understood, and in doing so changing the relationship between knowledge and freedom. Indeed, following Foucault (1982) the “task of philosophy as a critical analysis of our world is something which is more and more important. Maybe the most certain of all philosophical problems is the problem of the present time, and of what we are, in this very moment… maybe to refuse what we are” (Dreyfus and Rabinow 1982: 216). 

One way of doing this is to think about Critical Encryption Practices, for example, and the way in which technical decisions (e.g. plaintext defaults on email) are made for us. The critique of knowledge also calls for us to question the coding of instrumentalised reason into the computal. This calls for a critique of computational knowledge and as such a critique of the society producing that knowledge. 


Bibliography

Andreessen, M. (2011) Why Software Is Eating The World, Wall Street Journal, August 20 2011, http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html#articleTabs%3Darticle

Bamford, J. (2012) The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say), Wired, accessed 19/03/2012, http://www.wired.com/threatlevel/2012/03/ff_nsadatacenter/all/1

Berry, D. M. (2011) The Philosophy of Software: Code and Mediation in the Digital Age, London: Palgrave Macmillan.

Berry, D. M. (2013) The Minigraph: The Future of the Monograph?, Stunlaw, accessed 29/08/2013, http://stunlaw.blogspot.nl/2013/08/the-minigraph-future-of-monograph.html

Berry, D. M., Dartel, M. v., Dieter, M., Kasprzak, M. Muller, N., O'Reilly, R., and Vicente, J. L (2012) New Aesthetic, New Anxieties, Amsterdam: V2 Press.

Berry, D. M., Dieter, M., Gottlieb, B., and Voropai, L. (2013) Imaginary Museums, Computationality & the New Aesthetic, BWPWAP, Berlin: Transmediale.

Briz, N. (2013) Apple Computers, accessed 29/08/2013, http://nickbriz.com/applecomputers/

Davies, N. (2013) MI5 feared GCHQ went 'too far' over phone and internet monitoring, The Guardian, accessed 22/06/2013, http://www.guardian.co.uk/uk/2013/jun/23/mi5-feared-gchq-went-too-far

Dilger, D.E. (2013) Inside iOS 7: iBeacons enhance apps' location awareness via Bluetooth LE,
AppleInsider, accessed 02/09/2013, http://appleinsider.com/articles/13/06/19/inside-ios-7-ibeacons-enhance-apps-location-awareness-via-bluetooth-le

Emami, G (2012) Bruce Sterling At SXSW 2012: The Best Quotes, The Huffington Post, accessed 29/08/2013, http://www.huffingtonpost.com/2012/03/13/bruce-sterling-sxsw-2012_n_1343353.html

Feldman, S. (2009) Micro-Location Overview: Beyond the Metre...to the Centimetre, Sensors and Systems, accessed 02/09/2013, http://sensorsandsystems.com/article/columns/6526-micro-location-overview-beyond-the-metreto-the-centimetre.html

Finley, K. (2013) Meet the Hackers Who Want to Jailbreak the Internet, Wired, http://www.wired.com/wiredenterprise/2013/08/indie-web/

ITU (2012) Measuring the Information Society, accessed 01/01/2013, http://www.itu.int/ITU-D/ict/publications/idi/material/2012/MIS2012-ExecSum-E.pdf

Kalakota, R. (2011) Big Data Infographic and Gartner 2012 Top 10 Strategic Tech Trends, accessed 05/05/2012, http://practicalanalytics.wordpress.com/2011/11/11/big-data-infographic-and-gartner-2012-top-10-strategic-tech-trends

Kosner, A. W. (2013) Why Micro-Location iBeacons May Be Apple's Biggest New Feature For iOS 7, Forbes, accessed 02/09/2013, http://www.forbes.com/sites/anthonykosner/2013/08/29/why-micro-location-ibeacons-may-be-apples-biggest-new-feature-for-ios-7/

Mann, S. (2001) Digital Destiny and Human Possibility in the Age of the Wearable Computer, London: Random House.

Manovich, L. (2013) Software Takes Command, MIT Press.

McCormick, T. (2013) From Monograph to Multigraph: the Distributed Book, LSE Blog: Impact of Social Sciences, accessed 02/09/2013, http://blogs.lse.ac.uk/impactofsocialsciences/2013/01/17/from-monograph-to-multigraph-the-distributed-book/

Personal Cloud (2013) Personal Clouds, accessed 29/08/2013, http://personal-clouds.org/wiki/Main_Page

Poitras, L., Rosenbach, M., Schmid, F., Stark, H. and Stock, J. (2013) How the NSA Targets Germany and Europe, Spiegel, accessed 02/07/2013, http://www.spiegel.de/international/world/secret-documents-nsa-targeted-germany-and-eu-buildings-a-908609.html

Schlueter Langdon, C. 2003. Does IT Matter? An HBR Debate--Letter from Chris Schlueter Langdon. Harvard Business Review (June): 16, accessed 26/08/2013, http://www.ebizstrategy.org/research/HBRLetter/HBRletter.htm and http://www.simoes.com.br/mba/material/ebusiness/ITDOESNTMATTER.pdf

Seppala, T. J. (2013) Broadcom adds WiFi Direct to its embedded device platform, furthers our internet-of-things future, Engadget, accessed 02/09/2013, http://www.engadget.com/2013/08/27/broadcom-wiced-direct/

Stone, B. (2009) Amazon Erases Orwell Books From Kindle, The New York Times, accessed 29/08/2013, http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html?_r=0

26 August 2013

Softwarization: A Tentative Genealogy



I was interested in seeing how the term "softwarization" had been used previously, especially considering I use it in my own work, and was rather surprised to find usage dating back to 1969 (Modern Data 1969). So far this is the earliest usage I have been able to uncover, but it is interesting to note how similar the usage of the concept has been, as the following quotations from the extant literature show. By no means meant to be exhaustive, the list does demonstrate that the notion of "softwarization" has been around almost as long as the concept of software itself.

  • Much more modest in scope, my book present episodes from the history of "softwarization"... of culture between 1960 and 2010, with a particular attention to media software – from the original ideas which led to its development to its current ubiquity (Manovich 2013: 5). 
  • Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics (Berry 2012). 
  • In my view, this ability to combine previously separate media techniques represents a fundamentally new stage in the history of human media, human semiosis, and human communication, enabled by its “softwarization” (Manovich 2008: 29). 
  • Sure, it would be a very good idea, and if you watch and see what happens in the 21st century you’ll see more and more manufacturers deciding to do precisely that, because of the value of empowered user innovation, which will drive down their costs of making new and better products all the time. Indeed for reasons which are as obvious to manufacturers as they are to us, the softwarization of hardware in the 21st century is good for everybody (Moglen 2004).
  • The history of modern production is intimately tied to the automation of business processes. First, companies used steam engines, then conveyor belts, and today we use information systems, and especially software, to automate business activities. We might call it "softwarization" (Schlueter Langdon 2003). 
  • The lightning-fast development of new software is producing technologies and applications ''that we couldn't even envision 10 years ago,'' [W. Brian] Arthur contended, ''redefining whole industries'' and creating new ones. Virtually every industry will be affected, he says. Just as the industrial revolution uprooted many blue-collar jobs, today's ''softwarization'' will displace many white-collar workers (Arthur, quoted in Pine 1997)
  • We suggest an expression "softwarization" to describe a general trend, in which "software" such as knowledge and services is given a relatively higher appraisal than "hardware" such as goods and resources (Shingikai 1983: 74). 
  • However, because of the increasing "softwarization" of the industry, this is no longer sufficient. "The time is not far," Mr. Jequier said, "when computer usage will become part of the normal school and university curriculum" (Modern Data 1969: 32).


Bibliography

Berry, D. M. (2012) Life in Code and Software/Introduction, Life in Code and Software, accessed 26/08/2013, http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/Introduction

Manovich, L. (2008) Software Takes Command, draft, accessed 26/08/2013, http://black2.fri.uni-lj.si/humbug/files/doktorat-vaupotic/zotero/storage/D22GEWS3/manovich_softbook_11_20_2008.pdf

Manovich, L. (2013) Software Takes Command, London: Bloomsbury.

Modern Data (1969) International News, Modern Data, Volume 2, Issue 8.

Moglen, E. (2004) Eben Moglen's Harvard Speech - The Transcript, Groklaw, accessed 26/08/2013, http://stevereads.com/cache/eben_moglen_jolt_speech.html

Pine, A. (1997) America's Economic Future? Happy Days Could Be Here Again, accessed 26/08/2013, Los Angeles Times, http://articles.orlandosentinel.com/1997-06-22/news/9706230181_1_economists-inflation-oil-embargo/2

Schlueter Langdon, C. 2003. Does IT Matter? An HBR Debate--Letter from Chris Schlueter Langdon. Harvard Business Review (June): 16, accessed 26/08/2013, http://www.ebizstrategy.org/research/HBRLetter/HBRletter.htm and http://www.simoes.com.br/mba/material/ebusiness/ITDOESNTMATTER.pdf

Shingikai, K. (1983) Japan in the Year 2000: Preparing Japan for an Age of Internationalization, the Aging Society and Maturity, Japan: Japan Times, Limited.


14 August 2013

The Minigraph: The Future of the Monograph?


It has taken digital a lot longer than many had thought to provide a serious challenge to print, but it seems to me that we are now in a new moment in which digital texts enable screen-reading, if it is not an anachronism to still call it that, as a sustained reading practice. Here, I am thinking particularly of the way in which screen technologies, including the high-resolution retina displays common on iPhones, Kindle e-ink, etc., combined with much more sensitive typesetting design practices in relation to text, are producing long-form texts that are pleasurable to read on a screen-based medium and as ebooks. This has happened most noticeably in magazine articles and longer newspaper features, but is beginning to drift over into the well designed reading apps that we find on our mobile devices, such as Pocket and the "Reader" function in Safari. With this change, questions are finally being seriously asked about our writing practices, especially in terms of the assumptions and affordances that are coded into software word-processors, such as Microsoft Word, which assumes, if not enforces, a print-medium mentality in the writing practice. Word wants you to print the documents you write, and this prescriptive behaviour by the software encourages us to "check" our documents in a "real" paper form before committing to them – even if the final form would have been a digital PDF. The reason is that even the humble PDF is also designed for printing, as anyone who has tried to read a PDF document on a digital screen will attest, with its clunky and ill-formatted structure that actively fights against a user trying to resize a document to read. But when the reading practices of screen media are sufficient, then many of the assumptions of screen writing can be jettisoned, and with them the most disruptive and unpredictable will be the practice of writing for paper.

For there is little doubt that writing and reading the screen is different from print (see Berry 2012; Gold 2012). These differences are not just found at a technical level, for they also include certain forms of social practice, such as reading in public, passing around documents, sharing ideas and so forth. They also include the kinds of social signalling that digital documents have been very poor at incorporating into their structures, such as the cover, the publisher, the "name", or a striking design or image. Nonetheless, certainly at the present phase of digital texts, I think it is the typesetting and typography, combined with the social reading practices that take place, such as social sharing, marking, copying/pasting, and commenting, that make digital suddenly a viable way of creating and consuming textual works. In some ways, the social signalling of the cover artwork, etc. has been subsumed into social media such as Facebook and Twitter, but I think that it is a matter of time before this is incorporated into mobile devices in some way, once screen technologies, especially an e-ink back cover, can be built for pennies. But to return to the texts themselves, the question of writing, of putting pen to paper, an ironic phrase if ever there was one, is on the cusp of radical change. The long thirty-year period of stable writing software created by the virtual monopoly that Microsoft gained over desktop computers, most noticeably represented by Windows, its desktop operating system, and Office, its productivity suite, is drawing to a close. From its initial introduction in 1983 on the Xenix system as Multi-Tool Word, renamed later that year to the familiar Microsoft Word that we all know today (and often hate), print has been the lodestar of word-processor design.

The next stage of digital text is unveiling before our eyes, and as it does, much of the textual apparatus of print is migrating to the digital platform, and as it does so the advantages of new search and discovery practices make books extremely visible and usable again, such as through Google Books (Dunleavy 2012). There is still a lot of experimentation in this space and some problems remain: for example, there is currently no viable alternative to the "chunking" process of reading that print has taught us through pages and page numbering, nor is there a means of bookmarking as convenient as the obviousness of the changing weight of the book as it moves through our hands, or the visual clues afforded by the page volume changing from unread to read as we turn the pages. However, this has been mitigated in some ways by a turning away from the very long-form, in terms of book or monograph length texts of around 80,000 words, to the moderate long-form, represented by the 15-40,000 word text which I want to call the minigraph.

By minigraph I am seeking to distinguish a specific length of text, and therefore size of book, that is able to move beyond the very real limitations of the 6-8,000 word article, and yet is not at such a length that the chunking problem of reading digital texts becomes too much of a problem. In other words, in its current stage of implementation, I think that digital long-form texts are most comfortable to read when they stay within this golden ratio of 15-40,000 words, broken into five or six chapters. The lack of chunking is still a problem, in my opinion, without helpful "page" numbers, and I don't think that paragraph numbering has provided a usable solution to this, but the shortness of the text means that it is readable within a reasonable period of time, creating a de facto chunking at the level of the minigraph chapter (between 2,000 and 5,000 words). Indeed, the introduction of an algorithmic paging system that is device-independent would also be helpful, for example through a notion of "planes" which are analogous to pages but calculated in real-time (see Note 1 below). This would help sidestep the problem of fatigue in digital reading, apparent even in our retina/e-ink screen practices, while also creating works that are long enough to be satisfying to read and can offer interesting discussion, digression and scholarly apparatus as necessary. Other publishers have already been experimenting with the form, such as Palgrave with its Pivot series, a new e-book format: "at 30,000 to 50,000 words, it's longer than a journal article but shorter than a traditional monograph. The Palgrave Pivot, said Hazel Newton, head of digital publishing, 'fills the space in the middle'" (Cassuto 2013). Indeed, Stanford University Press has also started "to release new material in the form of midlength e-books. 'Stanford Briefs' will run 20,000 to 40,000 words in length", which Cassuto (2013) similarly calls the "mini-monograph".
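The "planes" idea can be sketched computationally: if a plane is simply a screenful computed for the current device, then a stable character offset into the text maps deterministically to a plane number on any device, giving readers a shareable, citable location without fixed pages. The function and its parameters below are hypothetical illustrations of the notion, not a proposed standard.

```python
def plane_of(char_offset, chars_per_plane):
    """Return the 1-indexed 'plane' containing a character offset,
    where a plane is one screenful as computed for the current
    device -- a sketch of device-independent paging, not a spec."""
    return char_offset // chars_per_plane + 1

# The same offset falls on different planes on different screens,
# but each device can recompute the mapping in real time.
offset = 12_500
print(plane_of(offset, chars_per_plane=1_800))   # 7 (phone-sized screen)
print(plane_of(offset, chars_per_plane=3_600))   # 4 (tablet-sized screen)
```

A citation would then name the offset, with each device rendering it as its own plane number.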

The next question is clearly how one should write a minigraph, considering the likelihood that Microsoft Word will algorithmically prescribe paper norms, which in academia tend towards either the 7,000-word article or the 70,000-word monograph. Here, I think Dieter (2013) is right to make links with the writing practices of Book Sprints as a connecting thread to new forms of publishing (see Hyde 2013). The Book Sprint is a "genre of the ‘flash’ book, written under a short timeframe, to emerge as a contributor to debates, ideas and practices in contemporary culture... interventions that go well beyond a well-written blog-post or tweet, and give some substantive weight to a discussion or issue... within a range of 20-40,000 words" (Berry and Dieter 2012). This rapid and collaborative means of writing is a very creative and intensified form of writing, but it also tends towards the creation of texts that appear to be at an "appropriate" size for the digital medium which makes those writing practices possible in the first place. Book Sprints are usually formed from 4-8 people actively involved in the writing process, facilitated by another non-writing member, a structure which conveniently maps onto the minigraph chapters discussed earlier. For Dieter, the Book Sprint is conducive to new writing practices, and by extension new reading practices, for network cultures, and therefore "formations that break from subjugation or blockages in pre-existing media and organizational workflows" (Dieter 2013). In this I think he is broadly correct; however, Book Sprints also point toward certain affordances for textual production that are conducive to reading and writing in a digital medium, and, in the context of this discussion, to the word count of a minigraph.

Nick Montfort (2013) has suggested a new, predominantly digital, form of writing that enables different forms of scholarly communication, in his case that of the technical report, which he argues "is as fast as a speeding blog, as detailed and structured as a journal article, and able to be tweeted, discussed, assessed, and used as much as any official publication can be. It is issued entirely without peer review". Montfort, however, connects the technical report to the "grey literature" that is not usually considered part of scholarly publishing as such. Experiments, such as the "pamphlets" issued by the Stanford Literary Lab, which Montfort argues are all but technical reports in name, seem to lie between 10,000 and 15,000 words in length, slightly longer than a journal article, and yet a little shorter than a minigraph.

However, a key difference, at least in the form in which I am considering the minigraph as a viable form of scholarly production, is that neither the Book Sprint nor the technical report is peer-reviewed, although they might be "peer-to-peer reviewed" (see Cebula 2010; Fitzpatrick 2011). Rather, they are rapidly produced, shared and collaborative forms of document geared towards social media and intervention, or technical documentation. In contrast, the minigraph would share with the other main scholarly outputs, the journal article and the monograph, the need for peer review and production at a high level of textual quality. This is where the minigraph points to new emergent affordances of the digital that enable particular kinds of scholarly activity, such as presenting finished work, carefully annotated and referenced, supported and discursively presented, through these nascent digital textual technologies. That is, if these intuitions are right about the current state of digital technologies and their affordances for the writing and reading of scholarly work, then the minigraph might be an object with the right structure and form for digital scholarship, augmenting the article, review, monograph and so forth. Indeed, the minigraph might offer exactly the kind of compromise for scholarly work called for by, for example, Drucker (2013) and Nardone and Fitzpatrick (2013), and point towards new possibilities for writing beyond the "article" or the "book", forms which Robertson (2013) describes as "scholarship" and which are institutionally constraining on academic creativity.

In some ways the minigraph seems a much less radical suggestion than the multi-modal, all-singing-and-dancing digital object that many have been calling for or describing. However, the minigraph, as conceptualised here, is potentially deeply computational in form; more properly we might describe the minigraph as a code-object. In this sense, the minigraph is able to contain programmable objects itself, in addition to its textual load, opening up many possibilities for interactive dimensions to its use, as suggested by the Computable Document Format (CDF) created by Wolfram. The minigraph as described here does not, of course, exist as such, although its form is detectable in, for example, the documents produced by the Quip app, in the dexy format as "literate documentation", or in the Booktype software. It is manifestly not meant to take the form of Google Docs/Drive, which is essentially traditional word-processing software in the cloud and which, ironically, still revolves around a print metaphor. The minigraph is, then, a technical imaginary for what digital scholarly writing might be, one which remains to be coded into concrete software and manifested in the practices of scholarly writers and readers. Nonetheless, as a form of long-form text amenable to the mobile practices of readers today, the 15-40,000 word minigraph could provide a key expressive scholarly form for the digital age.


Notes

[1] The minigraph chunks would be at 250-350 word intervals, roughly page-sized, with chapters of 2-5,000 words. There is no reason why the term "page" could not be used for these chunks, but perhaps "plane" is more appropriate, the chunks representing vertical "cuts" in the text at an appropriate frequency. So "plane 5" would be analogous to page 5, but mathematically calculable: counting words from zero, (300 x plane number) gives the start word and ((300 x (plane number + 1)) - 1) gives the end word of a particular plane. This would make the page algorithmically calculable, and therefore device-independent, but also suitable for scholarly referencing, producing usable, user-friendly numbering throughout the text. As the planes are represented on screen by a digital display, the numbering system would immediately be comprehensible to existing readers of printed texts, and therefore offer a simple transition from paper-based page numbering to algorithmic numbering of documents. If the document were printed, the planes could be automatically reformatted to the page size, further making the link between page and plane straightforward for the reader, who might never realise the algorithmic source of the numbering system for plane chunks in a minigraph. Indeed, one might place the "plane resolution" within the minigraph text itself, in this case "300", enabling different plane chunks to be used within different texts, and hence changing the way in which a plane is calculated on a book-by-book basis, much as page counts vary with format. One might even have different plane resolutions within chapters in a book, enabling different chunks in different chapters or regions.
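Since the minigraph does not yet exist as software, the plane calculation can only be sketched. The following is a minimal illustration of the formula in this note, assuming a plane resolution of 300 words and counting both words and planes from zero; the function names are my own, hypothetical ones.

```python
def plane_bounds(plane_number, resolution=300):
    """Return the (start, end) word indices covered by a plane.

    Following the note: the start word is (resolution x plane number) and
    the end word is ((resolution x (plane number + 1)) - 1), which makes
    the boundaries device-independent and suitable for referencing.
    """
    start = resolution * plane_number
    end = (resolution * (plane_number + 1)) - 1
    return start, end


def plane_of_word(word_index, resolution=300):
    """Inverse mapping: the plane on which a given word index falls."""
    return word_index // resolution


# "Plane 5" of a text with the example resolution of 300 words:
start, end = plane_bounds(5)
print(start, end)          # 1500 1799
print(plane_of_word(1650)) # 5
```

Because the resolution is a parameter rather than a constant, storing it within the minigraph text itself, as the note suggests, would let each book (or even each chapter) calculate its planes differently while keeping references stable.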


Bibliography


Berry, D. M. (2012) Understanding Digital Humanities, London: Palgrave.

Berry, D. M. and Dieter, M. (2012) Book Sprinting, accessed 14/08/2013, http://www.booksprints.net/2012/09/everything-you-wanted-to-know/

Cassuto, L (2013) The Rise of the Mini-Monograph, The Chronicle of Higher Education, accessed 18/08/2013, http://chronicle.com/article/The-Rise-of-the-Mini-Monograph/141007/

Cebula, L. (2010) Peer Review 2.0, North West History, accessed 14/08/2013, http://northwesthistory.blogspot.co.uk/2010/09/peer-review-20.html

Dieter, M. (2013) Book Sprints, Post-Digital Scholarship and Subjectivation, Hybrid Publishing Lab, accessed 14/08/2013, http://hybridpublishing.org/2013/07/book-sprints-post-digital-scholarship-and-subjectivation/

Drucker, J. (2013) Scholarly Publishing, Amodern, accessed 14/08/2013, http://amodern.net/article/scholarly-publishing-micro-units-and-the-macro-scale/

Dunleavy, P. (2012) Ebooks herald the second coming of books in university social science, LSE Review of Books, accessed 18/08/2013, http://blogs.lse.ac.uk/lsereviewofbooks/2012/05/06/ebooks-herald-the-second-coming-of-books-in-university-social-science/

Fitzpatrick, K. (2011) Planned Obsolescence: Publishing, Technology, and the Future of the Academy, New York University Press.

Gold, M. K. (2012) Debates in the Digital Humanities, University of Minnesota Press.

Hyde, A. (2013) Book Sprints, accessed 14/08/2013, http://www.booksprints.net

Montfort, N. (2013) Beyond the Journal and the Blog, Amodern, accessed 14/08/2013, http://amodern.net/article/beyond-the-journal-and-the-blog-the-technical-report-for-communication-in-the-humanities/

Nardone, M., and Fitzpatrick, K. (2013) We Have Never Done It That Way Before, Amodern, accessed 14/08/2013, http://amodern.net/article/we-have-never-done-it-that-way-before/

Robertson, B. J. (2013) The Grammatization of Scholarship, Amodern, accessed 14/08/2013, http://amodern.net/article/the-grammatization-of-scholarship/






