08 May 2015

Signal Lab

As part of the Sussex Humanities Lab, at the University of Sussex, we are developing a research group clustered around information theoretic themes of signal/noise, signal transmission, sound theorisation, musicisation, simulation/emulation, materiality, game studies theoretic work, behavioural ideologies and interface criticism. The cluster is grouped under the label Signal Lab and we aim to explore the specific manifestations of the mode of existence of technical objects. This is explicitly a critical and political economic confrontation with computation and computational rationalities.

Signal Lab will focus on techno-epistemological questions around the assembly and re-assembly of past media objects, postdigital media and computational sites. This involves both attending to the impressions of the physical hardware (as a form of techne) and the logical and mathematical intelligence resulting from software (as a form of logos). Hence we aim to undertake an exploration of the technological conditions of the sayable and thinkable in culture, and of how the inversion of reason as rationality calls for the excavation of how techniques, technologies and computational media direct human and non-human utterances without reducing techniques to mere apparatuses.

This involves the tracing of the contingent emergence of ideas and knowledge in systems in space and time, to understand distinctions between noise and speech, signal and absence, message and meaning. This includes an examination of the use of technical media to create the exclusion of noise as both a technical and political function and the relative importance of chaos and irregularity within the mathematization of chaos itself. It is also a questioning of the removal of the central position of human subjectivity and the development of a new machine-subject in information and data rich societies of control and their attendant political economies.

Within the context of information theoretic questions, we revisit the old chaos, and the return of the fear of, if not aesthetic captivation toward, a purported contemporary gaping meaninglessness, often associated with a style of nihilism, a lived cynicism and a jaded glamour of emptiness or misanthropy. This is particularly so in relation to a political aesthetic that desires the liquidation of the subject, which in the terms of our theoretic approach creates not only a regression of consciousness but also the regression to real barbarism. That is, data, signal, mathematical noise, information and computationalism conjure the return of fate and the complicity of myth with nature, a concomitant total immaturity of society and a return to a society in which self-reflection can no longer open its eyes, and in which the subject not only does not exist but instead becomes understood as a cloud of data points, a dividual and an undifferentiated data stream.

Signal Lab will therefore pay attention both to the synchronic and diachronic dimensions of computational totality, taking the concrete meaningful whole and essential elements of computational life and culture. This involves the explanation of the emergence of the present given social forces in terms of some past structures and general tendencies of social change. That is, that within a given totality, there is a process of growing conflict among opposite tendencies and forces which constitutes the internal dynamism of a given system and can partly be examined at the level of behaviour and partly at the level of subjective motivation. This is to examine the critical potentiality of signal in relation to the possibility of social forces and their practices and articulations within a given situation and how they can play their part in contemporary history. This potentially opens the door to new social imaginaries and political possibility for emancipatory politics in a digital age.

24 December 2014


One of the key moments in the composition of the conditions of possibility for a digital abstraction, within which certain logical operations might be combined, performed and arranged to carry out algorithmic computation, took place in 1961 when James Buie, then employed by Pacific Semiconductor, patented transistor–transistor logic (TTL). This was an all-transistor logic built on analogue circuitry that crucially standardised the voltage configuration for digital circuitry (0V–5V). It represented a development from the earlier diode–transistor logic (DTL), which used a diode input network with an amplifying function performed by a transistor, and the even earlier resistor–transistor logic (RTL), in which resistors handled the input network and bipolar junction transistors (BJTs) served as the switching devices. The key to these logic circuits was the creation of a representation of logic functions through the arrangement of the circuitry such that key Boolean logic operations could be performed. TTL offered an immediate speed increase, as the transition over a diode input is slower than over a transistor. With the creation of TTL circuitry, the logical operations of NAND and NOR allowed the modular construction of a number of Boolean operations that themselves served as the components of microprocessor modules, such as the adder.
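The modular construction described above can be illustrated in code: NAND alone suffices to compose NOT, AND, OR and XOR, and from these a half adder, one of the building blocks of the adder module. This is an illustrative sketch of gate composition, not a model of Buie's actual circuitry:

```python
# Sketch: composing Boolean operations from the NAND primitive alone.

def NAND(a: int, b: int) -> int:
    """The primitive gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Returns (sum, carry) built entirely from NAND compositions."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]}, carry={half_adder(a, b)[1]}")
```

The same modularity then repeats at the next level up: half adders combine into full adders, and full adders into the arithmetic units of the microprocessor.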

I want to explore the importance of signal in relation to the interface between the underlying analogue carrier of the digital circuitry and the logical abstraction of digital computation – that is, the maximisation of signal over noise in the creation of a digital signal carrier. It is exactly at this point that the emergence of digital computation is made possible, but there is also a suggestive link between signal/noise that points to the use of abstraction to minimise noise throughout the design of the digital computer, and which creates a logical universe within which computational thinking – that is, signal without noise, or without noise as previously understood as thermal noise – is a constituent of programming practice. This is useful for developing an understanding of notions of materiality in theorising the digital, but also for making explicit the connection between digital "signal" and voltage "signal", that is, the possibility of the communication of information in a digital system.

At its most basic level standard TTL circuits require a 5-volt power supply, which provides the framework within which a binary dichotomy is constructed to represent the true (1) and the false (0). The TTL signal is considered "low", that is "false" or "0", when the voltage is between the values of 0V and 0.8V (with respect to ground), and "high", that is "true" or "1", when the voltage lies between 2.0V and 5V (called VCC to indicate that the top voltage is provided by the power supply, known as the positive supply voltage). Voltage which lies between 0.8V and 2.0V is considered "uncertain" or "illegitimate" and may resolve to either side of the binary division depending on the prior state of the circuitry, or be filtered out by the use of additional circuitry. The range of voltages allows for manufacturing tolerances and instabilities of the material carrier, such that noise, uncertainty and glitches can be tolerated. This tripartite division creates the following diagram:

Tripartite division of voltage in TTL digital circuitry
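The tripartite division can be sketched as a simple classifier. This is a minimal sketch using the threshold values discussed in the text, not a particular manufacturer's datasheet:

```python
# Sketch: classifying an input voltage under the TTL tripartite scheme.

def ttl_logic_level(voltage: float) -> str:
    """Map a voltage (with respect to ground) to its TTL reading."""
    if 0.0 <= voltage <= 0.8:
        return "low"          # read as "false" / 0
    elif 2.0 <= voltage <= 5.0:
        return "high"         # read as "true" / 1
    elif 0.8 < voltage < 2.0:
        return "uncertain"    # may resolve either way, or be filtered out
    else:
        return "out of range" # outside the 0V-5V supply framework

for v in (0.3, 1.4, 3.5):
    print(f"{v}V -> {ttl_logic_level(v)}")
```

The "uncertain" branch is precisely the tolerance for noise and glitches that the standardisation builds into the material carrier.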

This standardisation of the grammatisation of voltage creates the first and most significant "cut" of the analogue world, and one which was hugely important historically. By standardising the division of the binary elements of digital computation, the interoperability of off-the-shelf digital circuits becomes possible, and thus instead of thinking in terms of electrical compatibility, voltage and so forth, the materiality of the binary circuit is abstracted away. This makes possible the design and construction of a number of key circuits which can be combined in innovative ways. It is crucial to recognise that from this point the actual voltage of the circuits themselves vanishes into the background of computer design, as the key issue becomes the combination of logic circuits, and the issues of propagation, cross-talk and noise emerge at a different level. In effect, the signal/noise problematic is raised to a new and different level.

18 December 2014

The Sussex Humanities Lab

The Sussex Humanities Lab is a new programme that will seek to position the University of Sussex at the forefront of theoretical and empirical work exploring the purported fundamental re-configuration of the humanities offered by computational technologies. As culture is re-born digital, old divisions that marked out criticism from history, music from painting, image from text, object from performance, have become increasingly problematic – legacies, perhaps, of the mediality of technologies like print and the structures inherited from the medieval university. When cultural production flows through the digital, the boundaries between different media and practices are reconfigured. This new formation calls for us to re-imagine the humanities, and to build fields of study that transcend the computational and the aesthetic, informed by new digital objects of study, rather than by inherited disciplinary approaches. As such, this programme does not merely suggest a digital humanities newly transplanted into the existing disciplinary structures of the university, but rather another digital humanities, one which connects to the theoretical concerns of new media, media studies, critical theory, software studies, digital media, cultural studies and medium theory, whilst continuing to draw on and reconfigure the humanities within a digital milieu. As such, this suggests a turn to critical digital humanities and with it a set of concerns that engage with notions of materiality, medium-specificity, cultural critique, computation, networks, archives, performance, practices, and new computational cultures.

Directors: Caroline Bassett (PI), David M. Berry (Co-I), Sally Jane Norman (Co-I), Tim Hitchcock (Co-I), and Rachel Thomson (Co-I).

/// Launching Sept 2015 ///

13 November 2014

Flat Theory

The world is flat.[1] Or perhaps better, the world is increasingly "layers". Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina notion of mediation made possible and informed by the digital transformations ushered in by mobile technologies that provide a sense of place, as well as a sense of management of complex real-time streams of information and data.

Two new competing computational interface paradigms are now deployed in the latest versions of Apple's and Google's operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is "flat design", which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the ageing operating systems' human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, both in terms of screen but also processor capacity. It is important to note, however, that Apple avoids talking about "flat design" as its design methodology, preferring to talk through its platforms' specificity, that is, about iOS's design or OS X's design. The second is "material design", which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, "flat design" is "the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page" (Turner 2014). As Apple argues, one should "reconsider visual indicators of physicality and realism" and think of the user interface as "play[ing] a supporting role", that is, that techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents "content" as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to "rich design", which has been described as "adding design ornaments such as bevels, reflections, drop shadows, and gradients" (Turner 2014).

I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7/8 and the since-renamed Metro interface (now Microsoft Modern UI) – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls "aesthetic integrity" (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualising the building of a system of thought, in other words, a primitivist predisposition which draws on both a neo-Euclidian model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

The "flat turn" has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy of multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure, as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for the flat design interfaces follows that of the control society: "short-term and [with] rapid rates of turnover, but also continuous and without limit" (Deleuze 1992). To paraphrase Deleuze: humans are no longer in enclosures, certainly, but everywhere humans are in layers.

Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of "touching glass" to develop the "first post-Retina (Display) UI (user interface)" (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organisation of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.

Google, similarly, has reorganised its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google's Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it "is a sufficiently advanced form of paper as to be indistinguishable from magic" (Bohn 2014). But this is magic with constraints and affordances built into it: "if there were no constraints, it's not design — it's art", Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the "material metaphor is the unifying theory of a rationalized space and a system of motion", further arguing:
The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts... Motion respects and reinforces the user as the prime mover... [and together] They create hierarchy, meaning, and focus (Google 2014). 
This notion of materiality is a weird materiality inasmuch as Google "steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That's also important because while this material follows some physical rules, it doesn't create the 'trap' of skeuomorphism. The material isn't a one-to-one imitation of physical paper, but instead it's 'magical'" (Bohn 2014). Google emphasises this connection, arguing that "in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper" (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple, is crucial to understanding their respective stances in relation to flat design philosophy.[2]
Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity. 
Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows. 
Paradigmatic Substances for Materiality

In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple's flat design, Google uses the notion of remediated "paper" as a digital material, that is, this "material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness" (Google 2014). One might think, then, of Apple as painting on layers of glass, and of Google as placing thin paper objects (material) upon background paper. However, a key difference lies in the use of light and shadow in Google's notion, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
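The laminate grammar described here can be sketched as a little model of sheets ordered along the z-axis, each drawn back-to-front and casting shadows on whatever lies beneath it. This is a hypothetical illustration of the representational logic, not Google's or Apple's implementation, and the names used are invented for the sketch:

```python
# Sketch: a laminate interface as sheets occupying single z-axis positions.

from dataclasses import dataclass

@dataclass
class Sheet:
    name: str
    z: int  # position along the z-axis; higher z is nearer the viewer

def render_order(sheets):
    """Draw sheets back-to-front; each casts shadows on those beneath it."""
    ordered = sorted(sheets, key=lambda s: s.z)
    for i, sheet in enumerate(ordered):
        below = [s.name for s in ordered[:i]]
        print(f"{sheet.name} (z={sheet.z}) casts shadows on: {below or 'nothing'}")
    return [s.name for s in ordered]

# Hypothetical layout: a dialog floating above a card, above the background.
order = render_order([Sheet("card", 2), Sheet("background", 0), Sheet("dialog", 4)])
```

The point of the sketch is that meaning is carried by position in the laminate: the z-ordering alone determines what occludes, and what shadows, what.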

Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) in design (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and
mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).
The design grammar of the Swiss style has been combined with minimalism and the principle of "responsive design", that is, that the interface should be responsive to the materiality and specificity of the device and the context being displayed. Minimalism is a "term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials" (MoMA 2014). Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used "simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology", which he argued "established in the mind of the beholder 'strong gestalt sensation', whereby form and shape could be grasped intuitively" (MoMA 2014).[4]

Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works this article © 2010 Robert Morris/Artists Rights Society (ARS), New York.
The implications of these two competing world-views are far-reaching, in that much of the world's initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the "flattened" design paradigms. That is, they both represent, if only in potentia, a situation of a power relation and through this an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

It seems to me that these new flat design philosophies, in the broad sense, produce an order of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, which are meant to have implications not only for interface design, but more broadly for the ordering of concepts and ideas, the practices and the experience of computational technologies broadly conceived. Another way to put this could be to think about these moves as a computational founding, the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic both for re-presenting the computational and for teaching consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted, but it also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, as with Apple's "handoff" continuity framework, which also potentially changes reading and writing practices in interesting ways.

These new interface paradigms, introduced by the flat turn, have very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. I think that, further than this, the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (e.g. think dashboards, personal information systems, quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns but also a constituting grammar for building these systems in the first place. There is an element in which the notion of the postdigital may also be a useful way into thinking about the question of the link between art, computation and design given here (see Berry and Dieter, forthcoming), but also the importance of notions of materiality for the conceptualisation deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other "material" substance.[5]


[1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post. 
[2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film. Indeed, the very idea of "publication" and the material carrier for the notion of publication is informed by this materiality, even if only as a notional affordance given by the conceptualisation. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.
[3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century. 
[4] There are also some interesting links to be explored between the Superflat style and postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic but also in relation to the cultural moment in which "flatness" is linked to "shallow emptiness".
[5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use "flat" sound to indicate spatiality in terms of interface or interaction design.


Apple (2014) iOS Human Interface Guidelines, accessed 13/11/2014, https://developer.apple.com/library/ios/documentation/userexperience/conceptual/mobilehig/Navigation.html

Ashghar, T. (2014) The True History Of Flat Design, accessed 13/11/2014, http://www.webdesignai.com/flat-design-history/

Berry, D. M. (2014) Critical Theory and the Digital, New York: Bloomsbury.

Berry, D. M. and Dieter, M. (forthcoming) Postdigital Aesthetics: Art, Computation and Design, Basingstoke: Palgrave Macmillan.

Bohn, D. (2014) Material world: how Google discovered what software is made of, The Verge, accessed 13/11/2014, http://www.theverge.com/2014/6/27/5849272/material-world-how-google-discovered-what-software-is-made-of

Cava, M. D. (2013) Jony Ive: The man behind Apple's magic curtain, USA Today, accessed 1/1/2014, http://www.usatoday.com/story/tech/2013/09/19/apple-jony-ive-craig-federighi/2834575/

Deleuze, G. (1992) Postscript on the Societies of Control, October, vol. 59: 3-7.

Google (2014) Material Design, accessed 13/11/2014, http://www.google.com/design/spec/material-design/introduction.html

Google Layout (2014) Principles, Google, accessed 13/11/2014, http://www.google.com/design/spec/layout/principles.html

Jitkoff, N. (2014) This is Material Design, Google Developers Blog, accessed 13/11/2014,  http://googledevelopers.blogspot.de/2014/06/this-is-material-design.html

MoMA (2014) Minimalism, MoMA, accessed 13/11/2014, http://www.moma.org/collection/details.php?theme_id=10459

Turner, A. L. (2014) The history of flat design: How efficiency and minimalism turned the digital world flat, The Next Web, accessed 13/11/2014, http://thenextweb.com/dd/2014/03/19/history-flat-design-efficiency-minimalism-made-digital-world-flat/

22 October 2014

Interview with David M. Berry at re:publica 2013

Open science interview at re:publica conference in Berlin, 2013, by Kaja Scheliga.

Kaja Scheliga: So, to start off... what is your field, what do you do?

David M. Berry: My field is broadly conceived as digital humanities or software studies. I focus in particular on critical approaches to understanding technology, through theoretical and philosophical work, so, for example, I have written a book called Philosophy of Software and I have a new book called Critical Theory and The Digital but I am also interested in the multiplicity of practices within computational culture as well, and the way the digital plays out in a political economic context.

KS: Today, here at the re:publica you talked about digital humanities. What do you associate with the term open science?

DB: Well, open science has very large resonances with Isaiah Berlin’s notion of the open society, and I think the notion of open itself is interesting in that kind of construction, because it implies a "good". To talk about open science implies firstly that closed science is "bad", that science should be somehow widely available, that everything is published and there is essentially a public involvement in science. It has a lot of resonances, not necessarily clear. It is a cloudy concept. 

KS: So where do you see the boundary between open science and digital humanities? Do they overlap or are they two separate fields? Is one part of the other?

DB: Yes, I think, as I was talking in the previous talk about how digital humanities should be understood within a constellation, I think open science should also be understood in that way. There is no single concept as such, and we can bring up a lot of different definitions, and practitioners would use it in multiple ways depending on their fields. But I think, there is a kind of commitment towards open access, the notion of some kind of responsibility to a public, the idea that you can have access to data and to methodologies, and that it is published in a format that other people have access to, and also there is a certain democratic value that is implicit in all of these constructions of the open: open society, open access, open science, etc. And that is really linked to a notion of a kind of liberalism that the public has a right, and indeed has a need to understand.  And to understand in order to be the kind of citizen that can make decisions themselves about science. So in many ways it is a legitimate discourse, it is a linked and legitimating discourse about science itself, and it is a way of presenting science as having a value to society.

KS:  But is that justified, do you agree with this concept? Or do you rather look at it critically?

DB: Well, I am a critical theorist. So, for me these kinds of concepts are never finished. They always have within them embedded certain kinds of values and certain kinds of positions. And so for me it is an interesting concept and I think "open science" is interesting in that it emerges at a certain historical juncture, and of course with the notion of a "digital age" and all the things that have been talked about here at the re:publica, everyone is so happy and so progressive and the future looks so bright – apparently...

KS: Does it?

DB: Yes, well, from the conference perspective, because re:publica is a technology conference, there is this whole discourse of progress – which is kind of an American techno-utopian vision that is really odd in a European context – for me anyway. So, being a critical theorist, it does not necessarily mean that I want to dismiss the concept, but I think it is interesting to unpick the concept and see how it plays out in various ways. In some ways it can be very good, it can be very productive, it can be very democratic; in other ways it can be used for example as a certain legitimating tool to get funding for certain kinds of projects, which means other projects, which are labelled "closed", are no longer able to get funded. So, it is a complex concept, it is not necessarily "good" or "bad".

KS: So, not saying 'good' or 'bad', but looking at the dark side of, say, openness, where do you see the limits? Or where do you see problem zones?

DB: Well, again, to talk about the "dark side," it is kind of like Star Wars or something. We have to be very careful with that framework, because the moment you start talking about the dark side of the digital – which is a big discussion currently going on, for example, around the dark side of the digital humanities – I think it is a bit problematic. That is why thinking in terms of critique is a much better way to move forward. So for me, what would be more interesting would be to look at the actual practices of how open science is used and deployed. Which practitioners are using it? Which groups align themselves with it? Which policy documents? And which government policies are justified by appealing to open science itself? And then, it is important to perform a kind of genealogy of the concept of "open science" itself. Where does it come from? What is it borrowing from? Where is the discussion over that term? Why did we come to this term being utilised in this way? I think that then shows us the force of a particular term, and places it within an historical context. Because open science ten years ago may have meant one thing, but open science today might mean something different. So, it is very important that we ask these questions.

KS: All right. And are there any open science projects that come to mind, spontaneously, right now?

DB: I'm not sure they would brand themselves as "open science", but I think CERN would be, for me, a massive open science project, one which likes to promote itself in these kinds of ways. So, the idea of a public good, publishing their data, having a lot of cool things on their website the public can look at – but ultimately, that justification for open science is disconnected because, well, what is the point of finding the Higgs boson, what is the actual point, where will it go, what will it do? And that question never gets asked because it is open science, so the good of open science makes it hard for us to ask these other kinds of questions. So, those are the kinds of issues that I think are really important. It is also interesting in terms of, for example, the American version of CERN which was cancelled. So why was CERN built, and how did open science enable that? I mean, we are talking huge amounts of money and large amounts of effort. Would this money have been better spent on solving the problem of unemployment? You know, we are in a fiscal crisis at the moment, a financial catastrophe, and these kinds of questions get lost because open science itself gets divorced from its political economic context.

KS: Yes. But it is interesting that you say that within open science certain questions are maybe not that welcome – so actually, it seems to be still pretty closed in certain places, right?

DB: Well, that is right: open itself is a way of closing down other kinds of debates. So, for example, in the programming world open source was promoted in order not to have a discussion about free software, because free software was just too politicised for many people. "Open" was a nice woolly term that meant everything to a lot of different people, did not feel political, and therefore could be promoted to certain actors – many governments, but also corporations. And people sign up to open source because it just sounds right – "open source, yes, who is not for open source?" I think if you were to ask anyone here you would struggle to find anybody against open source. But if you asked them if they are for free software, a lot of people would not know what it is. That concept has been pushed away. I think the same thing happens in science through these kinds of legitimating discourses: certain kinds of critical approaches get closed down. I think you would not be welcomed if, at the CERN press conference for the Higgs boson, you put up your hand and asked: "well actually, would it not have been better spending this money on solving poverty?" That would immediately not be welcomed as a legitimate line of questioning.

KS: Yes, right. Okay, so do you think science is already open, or do we need more openness? And if so, where?

DB: Well, again, that is a strange question that assumes that I know what "open" is. I mean openness is a concept that changes over time. I think that the project of science clearly benefits from its ability to be critiqued and checked, and I do not necessarily just want to have a Popperian notion of science here – it is not just about falsification – but I think verification and the ability to check numbers is hugely important to the progress of science. So that dimension is a traditional value of science, and very important that it does not get lost. Whether or not rebranding it as open science helps us is not so straightforward. I am not sure that this concept does much for us, really. Surely it is just science? And approaches that are defined as "closed" are perhaps being defined as non-science.

KS: What has the internet changed about science and working in research?

DB: Well, I am not a scientist, so –   

KS: - as in science, as in academia. Or, what has the internet changed in research?

DB: Well, this is an interesting question. Without being too philosophical about it, I hope – Heidegger talked about the fact that science was not science anymore, and that technology had massively altered what science was. Because science now is about using mechanisms, tools, digital devices, and computers in order to undertake the kinds of science that are possible. So it becomes an entirely technologically driven activity. Also, today science has become much more firmly located within economic discourse, so science needs to be justified in terms of economic output, for example. It is not just the internet and the digital that have introduced this; there are larger structural conditions that I think are part of this. So, what has the internet or the web changed about science? One thing is allowing certain kinds of scientism to be performed in public. And so you see this playing out in particular ways: certain movements – really strange movements – have emerged that are pro-science and just seek to attack people they see as anti-science. So, for example, the polemical atheist movement led by Richard Dawkins argues that it is pro-science and anyone who is against it is literally against science – they are anti-science. This is a very strange way of conceptualising science. And some scientists, I think, are very uncomfortable with the way Dawkins is using rhetoric, not science, to actually enforce and justify his arguments. Another example is the "skeptics" movement, another very "pro-science" movement that has very fixed ideas about what science is. So science becomes a very strong, almost political philosophy, a scientism. I am interested in exploring how digital technologies facilitate a technocratic way of thinking: a certain kind of instrumental rationality, as it were.

KS: How open is your research, how open is your work? Do you share your work in progress with your colleagues?

DB: Well, as an academic, sharing knowledge is a natural way of working – we are very collaborative, go to conferences, present new work all the time, and publish in a variety of different venues. In any case, your ability to be promoted as an academic, to become a professor, is based on publishing, which means putting work out there in the public sphere which is then assessed by your colleagues. So the very principles of academia are about publishing, peer review, and so on and so forth. So, we just have to be a bit careful about the framing of the question in terms of: "how 'open' is your work?", because I am not sure how useful that question is inasmuch as it is too embedded within certain kinds of rhetorics that I am a little bit uncomfortable with. So the academic pursuit is very much about sharing knowledge – but also knowledge being shared.

KS: Okay. I was referring to, of course, when you do work and when you have completed your research, you want to share it with others, because that is the point of doing the research in the first place: to find something out and then to tell the world, look, this is what I found out, right?

DB: Possibly. No.

KS: No?

DB: This is what I am saying. I mean –

KS: I mean, of course in a simplified way.

DB: Well, disciplines are not there to "tell the world". Disciplines are there to do research and to create research cultures. What is the point of telling the world? The world is not necessarily very interested. And so you have multiple publics – which is one way of thinking about it. So one of my publics, if you like, is my discipline, and cognate disciplines, and then broader publics like re:publica, and then maybe the general public. And there are different ways of engaging with those different audiences. If I were a theoretical physicist, for example, publishing in complex mathematical formulae, I could put that on the web but you are not really going to get an engagement from a public as such. That will need to be translated. And therefore maybe you might write a newspaper article which translates that research for a different public. So, I think it is not about just throwing stuff on the web or what have you. I think that would be overly simplistic. It is also about translation. So do I translate my research? Well, I am doing it now. I do it all the time. So, I talk to Ph.D. students and graduates – that is part of the dissemination of information, which is, I think, really what you are getting at. How do you disseminate knowledge?

KS: Exactly. And knowledge referring not only to knowledge that is kind of settled and finished, you know, I have come to this conclusion, this is what I am sharing, but also knowledge that is in the making, in the process, that was what I was referring to.

DB: Sure, yes. I mean, good academics do this all the time. And I am talking particularly about academia here. I think good academics do research and then they are teaching and of course these two things overlap in very interesting ways. So if you are very lucky to have a good scholar as a professor you are going to benefit from seeing knowledge in the making. So that is a more general question about academic knowledge and education. But the question of knowledges for publics, I think that is a different question and it is very, very complex and you need to pin down what it is you want to happen there. In Britain we have this notion of the public engagement of science and that is about translation. Let’s say you do a big research project that is very esoteric or difficult to understand, and then you write a popular version of it – Stephen Hawking is a good example of this – he writes books that people can read and this has major effects beyond science and academia itself. I think this is hugely important, both in terms of understanding how science is translated, but also how popular versions of science may not themselves be science per se.

KS: So, what online tools do you use for your research?

DB: What online tools? I do not use many online tools as such. I mean, I am in many ways quite a traditional scholar, I rely on books – I will just show you my notes. I take notes in a paper journal and I write with a fountain pen, which I think is a very traditional way of working. The point is that my "tools" are non-digital, I hardly ever digitise my notes, and I think it is interesting to go through the medium of paper to think about the digital, because digital tools seem to offer us solutions and we are very caught up in the idea that the digital provides answers. I think we have to pause a little bit, and paper forces you to slow down – that is why I like it. It is this slowing down that I think is really important when undertaking research, giving time to think by virtue of making knowledge embodied. Obviously, when it comes to collecting data and following debates I will use digital tools. Google of course is one of the most important, Google Scholar and social media are really interesting tools, Gephi is a very interesting social network analysis tool. I use Word and Excel as does pretty much everybody else. So the important issue is choosing which digital tools to use in which contexts. One thing I do much less of is, for example, the kind of programming where people write APIs and scrapers and all these kinds of approaches. I have been involved in some projects doing that, but I just do not have time to construct those tools, so I sometimes use other people’s software (such as digital methods tools).

Notes, reproduced in Lewandowska and Ptak (2013)

KS: Okay, and how about organising ideas, do you do that on paper? Or for example do you use a tool for task managing?

DB: Always paper. If you have a look in my journal you can see that I can choose any page and there is an organisation of ideas going on here. For me it is a richer way to work through ideas and concepts. Eventually, you do have to move to another medium – you know, I do not type my books on typewriters! – I use a word processor, for example. So eventually I do work on a computer, but by that point I think the structure is pretty much in my head, mediated through paper and ink – the computer is therefore an inscription device at the end of thinking. I dwell on paper, as it were, and then move over into a digital medium. You know, I do not use any concept mapping software, I just find it too clumsy and too annoying actually.

KS: Okay, so what puts you off using all those tools that offer you help and offer to make you more productive?

DB: Well, because firstly, I do not want to be more productive, and secondly, I do not think they help. So the first thing I tell my new students, including new Ph.D. students, is: buy a notebook and a pen and start taking notes. Do not think that the computer is your tool, or your servant. The computer will be your hindrance, particularly in the early stages of a Ph.D. It is much more important to carefully review and think through things. And that is actually the hardest thing to do, especially in this world of tweets and messages and emails – distractions are everywhere. There are no tweets in my book, thankfully, and it is the slowness and leisureliness that enables me to create a space for thinking. It is a good way of training your mind to pause and think before responding.

KS: So, you are saying that online tools kind of distract us from thinking and actually we think that we are doing a lot of stuff but actually we are not doing that much, right?

DB: Well, the classic problem is students that, for example, think they are doing an entirely new research project and map it all out in a digital tool that allows you to do fancy graphs, etc. – but they are not asking any kind of interesting research questions because they have not actually looked at the literature and they do not know the history of their subject. So it is very important that we do this, indeed some theorists have made the argument that we are forgetting our histories. And I think this is very true. The temptation to be in the future, to catch the latest wave or the latest trend affects Ph.D. students and academics as much as everybody else. And there are great dangers from chasing those kinds of solutions. Academia used to be about taking your time and being slow and considering things. And I think in the digital age academia’s value is that it can continue to do that, at least I hope so.

KS: Okay, but is there not a danger if you say: okay, I am taking my time, I am taking my paper and my pen, while others are hacking away, busy using all those online tools – which, you could say, speeds up some part of research, at least when you draw out the cumulative essence of it. Can you afford to invest the time?

DB: Well, it is not either/or. It is both. The trouble is, I find anyway, with Ph.D. students, that their rush to use digital tools is about avoiding having to use the paper. A classic example of this is Endnote. Everybody rushes to use Endnote because they do not like doing bibliographies. But actually, doing the bibliography by hand is one of the best things you can do, because you learn your field’s knowledge, and you immediately recognise names because you are the one typing them in. Again this is a question of embodiment. When you leave that to a computer program to do for you, laziness emerges – and you just pick and choose names to scatter over your paper. So, I am not saying you should not use such tools; I am saying that you should maybe do both. I mean, I never use these tools to construct bibliographies, I do them by hand because it encourages me to think through: what is this person really contributing, what do they add? And I think that is really important.

KS: Although it probably should be more about: okay, what do I remember of this person’s writing, and what have they contributed – and not so much about whose name sounds fancy and which names I need to drop here.

DB: Totally. Well, there has been some interesting work on this. Researchers have undertaken bibliometric analysis to show how references are used in certain disciplines and how common citations crop up again and again because they were used in previous papers and researchers feel the need to mention them again – so it becomes a name-checking exercise. Interestingly, few people go back and read these original canonical papers. So it is really important to read early work in a field, and place it within an historical context and trajectory, if one is to make sense of the present.

KS: A last question, I want to ask you about collaborative writing, do you write with other people and if so, how does that work? Where do you see advantages and where do you see possible trouble?

DB: Yes, I do. I have been through the whole gamut of collaborative writing, so I have seen both the failures and the successes. Collaborative writing is never easy, first and foremost – particularly, I think, for humanities academics, because we are taught, and we are promoted, on the basis of our name being on the front of a paper or on the cover of a book. This obviously adds its own complications; plus, you know, academics tend to be very individualistic, and there are always questions about –

KS: …in spite of all the collaboration, right?

DB: Indeed, yes of course, I mean that is just the academic way, but I think you need that, because writing a book requires you to sit in a room for months and months and months while the sun is shining and everyone else is having fun, and you are sitting there in a gloomy room typing away, so you need that kind of self-drive and belief, and that, of course, causes frictions between people. So I have tried various different methods of working with people, but one method I found particularly interesting is a method called booksprinting. It is essentially a time-boxed process where you come together with, let us say, four or five other scholars, and you are locked in a room for the week (figuratively speaking!), leaving only to sleep, and you eat together, write together, concept map and develop a book, collaboratively. And then the book that is produced is jointly authored; there are no arguments over that – if you do not agree you can leave – but the point is that the collaborative output is understood and bought into by all the participants. Now, to many academics this sounds like absolute horror, and indeed when I was first asked if I would like to be involved I was sceptical – I went along but I was sure this was going to be a complete failure. However, it was one of the most interesting collaborative writing processes I have been involved in. I have taken part in two book sprints to date (three including 2014). You are welcome to have a look at the first book, it is called New Aesthetic New Anxieties. It is amazing how productive those kinds of collaborative writing processes can be. But it has to be a managed process. So, do check out booksprinting, it is very interesting – see also Imaginary museums, Computationality & the New Aesthetic and On Book Sprints.

KS: Okay, but then for that to work what do you actually / from your experience, can you draw out factors that make it work?

DB: Sure. The most important factor is having a facilitator – someone who does not write. The facilitator’s role is to make sure that everybody else does write. And that is an amazing ability, a key person, because they have to manage difficult people and situations – it is like herding cats. Academics do not like to be pushed, for example. And the facilitator I have worked with is very skilled at this kind of facilitation. The second thing is the kinds of writing that you do and how you do it. The booksprinting process I have been involved in has been very paper-based, so again there is a lot of paper everywhere, there are post-it notes, there is a lot of sharing of knowledge. And this is probably the bit you are going to find interesting: there is, nonetheless, a digital tool which enables you to write collaboratively. It is a cleverly written tool, it has none of the bells and whistles, it is very utilitarian, and it really focuses the writing process and working together. And, having seen it used on two different booksprints, I can affirm that it does indeed help the writing process. I recommend you have a look.

KS: So, what is the tool?

DB: It is called Booktype. And Adam Hyde is the facilitator who developed the process of Book Sprints, and is also one of the developers of the software.

KS: Okay, interesting. Any questions? Or any question I did not ask you, anything you want to add that we have missed out, any final thoughts? Any questions for me?

DB: Yes, I do think that a genealogy of "open science" is important, and your questions are really interesting because they are informed by certain assumptions about what open science is. In other words, there is a certain position you are taking which you do not make explicit, and which I find interesting. So it might be useful to reflect on how "open science" needs to be critically unpacked further.

KS: Okay, great, thank you very much.

DB: My pleasure.

KS: Thanks.

DB: Thank you.

Interview archived at Zenodo. Transcript corrected from the original to remove errors and clarify terms and sentences. 
