31 March 2012

Computational Thinking: Some thoughts about Abduction

One of the striking features of computation is the extent to which forms of pattern matching are required in computer processing. Pattern recognition can be described as a means of identifying repeated shapes or structures which are features of a system under investigation. Whilst we tend to think of patterns as visual, they can of course also be conceptual, iterative, representational, logical, mathematical, etc. in form, provided that the underlying computational system can be programmed to recognise the distinctive shape of the pattern in the data. They can also consist of meta-patterns, described by Gregory Bateson as patterns that can be detected across different spheres, such as culture, the humanities, science and the social, or 'the pattern that connects' (see Bateson 1979; Dixon 2012). The recognition of patterns, and the uncovering of their relationships in sets of data, was called 'abductive reasoning' by Charles Peirce, who contrasted it with inductive and deductive reasoning. Indeed, Peirce described abduction as a kind of logical inference akin to guessing. This he called the leap of abduction, whereby one could abduce A from B if A is sufficient (or nearly sufficient) but not necessary for B. The possible uses of this within a computational context should be fairly obvious, especially when software is handling partial, fuzzy or incomplete data and needs to generate future probabilistic decision points, or recognise important features or contours in a data set.
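To make this concrete, here is a minimal sketch, in Python, of how a piece of software might 'abduce' the most promising explanatory conjecture from partial observations. The hypotheses, events and likelihood values are entirely invented for illustration; the point is only the shape of the reasoning: rank candidate explanations by how well each would account for the incomplete data seen so far, and treat the winner as fallible and open to further test.

```python
# A toy model of abductive inference over incomplete data.
# All hypotheses, events and probabilities below are invented.

observations = {"lights_flicker": True, "wifi_down": True}   # partial data

# Guessed values for P(event | hypothesis)
hypotheses = {
    "power_fault":  {"lights_flicker": 0.90, "wifi_down": 0.70, "disk_full": 0.10},
    "router_crash": {"lights_flicker": 0.10, "wifi_down": 0.95, "disk_full": 0.10},
    "disk_failure": {"lights_flicker": 0.05, "wifi_down": 0.20, "disk_full": 0.90},
}

def plausibility(hypothesis, observations):
    """Score a hypothesis by the likelihood of the evidence actually seen;
    events we have no data for are simply ignored (the data is incomplete)."""
    score = 1.0
    for event, seen in observations.items():
        p = hypothesis.get(event, 0.5)
        score *= p if seen else (1.0 - p)
    return score

ranked = sorted(hypotheses,
                key=lambda h: plausibility(hypotheses[h], observations),
                reverse=True)
print(ranked[0])   # the most promising conjecture -- still fallible, still to be tested
```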

Charles Sanders Peirce (1839–1914) argued that pattern matching, which he called abduction or retroduction (he also used the terms presumption or hypothesis), was a type of hypothesis formation. The crucial function of 'a pattern of abduction … consists in its function as a search strategy which leads us, for a given kind of scenario, in a reasonable time to a most promising explanatory conjecture which is then subject to further test' (Schurz 2008, 205). Peirce argued,
Abduction is the process of forming an explanatory hypothesis. It is the only logical operation which introduces any new idea; for induction does nothing but determine a value, and deduction merely evolves the necessary consequences of a pure hypothesis. Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be (Peirce 1958: 5.171, original emphasis).
Or perhaps better:
The abductive suggestion comes to us like a flash. It is an act of insight, although extremely fallible insight. It is true that the different elements of the hypothesis were in our minds before; but it is the idea of putting together what we had never before dreamed of putting together which flashes the new suggestion before our contemplation (Peirce 1988: 227, original emphasis).
In effect, abduction is the process of arriving at an explanatory hypothesis or a process of generating a hypothesis. As Eldridge explains,
For Peirce, abduction works from these surprising facts to determine a possible, plausible explanation. Furthermore, Peirce stresses the fact that the logic of abduction is fallible – abductive inferences, like induction, can, and do, lead us to the wrong result (Peirce 1958: 5.189, 5.197, 6.532). However, as a part of the triad, abduction is able to correct itself, once it is investigated by deduction and tested by induction (Peirce 1958: 5.574). Because of this, we should never take the conclusion of an abductive inference to be a fact in and of itself until it is tested. Until that point "abduction commits us to nothing…it merely causes a hypothesis to be set down upon our docket of cases to be tried" (Peirce 1958: 5.602). Furthermore, by hypothesis, Peirce does not just mean scientific hypotheses. Abduction certainly includes the more formalized, conscious cognitive process of deliberately searching for an explanation to a set of particular facts; however, abduction is also a logical inference used in everyday life from crude hypotheses (his Catholic priest example) to perceptual judgments (understanding the information that we receive from our senses) (Peirce 1958: 7.202, 5.180, 5.184) (Eldridge n.d.).
Patterns were made popular as a heuristic for thinking about the new problematics introduced by software systems through the work of the architect Christopher Alexander (1936-), particularly Notes on the Synthesis of Form (Alexander 1964), The Timeless Way of Building (Alexander 1979), and A Pattern Language (Alexander et al. 1977), which influenced computer scientists, who found useful parallels between building design and the practice of software design (Rybczynski 2009). Alexander's central premise in his books, 'driving over thirty years of thoughts, actions, and writings, is that there is something fundamentally wrong with twentieth century architectural design methods and practices' (Lea 1997). Indeed, A Pattern Language was originally written to enable any citizen to design and construct their own home, although it is arguable that he has had more influence on computer scientists than on architects. As Appleton explains, patterns 'are a literary form of software engineering problem-solving [approach] that has its roots in a design movement of the same name in contemporary architecture... [they enable a] common vocabulary for expressing its concepts, and a language for relating them together. The goal of patterns within the software community is to create a body of literature to help software developers resolve recurring problems encountered throughout all of software development' (Appleton 2000).
The Timeless Way of Building and A Pattern Language were written as a pair, with the former presenting rationale and method, and the latter concrete details. They present a fresh alternative to the use of standardized models and components, and accentuate the philosophical, technical and social-impact differences between analytic methods and the adaptive, open, and reflective (all in several senses) approach to design. The term pattern is a preformal construct (Alexander does not ever provide a formal definition) describing sets of forces in the world and relations among them. In Timeless, Alexander describes common, sometimes even universal patterns of space, of events, of human existence, ranging across all levels of granularity. A Pattern Language contains 253 pattern entries. Each entry might be seen as an in-the-small handbook on a common, concrete architectural domain. Each entry links a set of forces, a configuration or family of artifacts, and a process for constructing a particular realization. Entries intertwine these 'problem space', 'solution space', and 'construction space' issues in a simple, down-to-earth fashion, so that each may evolve concurrently when patterns are used in development (Lea 1997).
Patterns are therefore reusable, structured, or formalised ways of doing things or processing information and data. Alexander himself defined each pattern as:

a three-part rule, which expresses a relation between a certain context, a problem, and a solution. As an element in the world, each pattern is a relationship between a certain context, a certain system of forces which occurs repeatedly in that context, and a certain spatial configuration which allows these forces to resolve themselves. As an element of language, a pattern is an instruction, which shows how this spatial configuration can be used, over and over again, to resolve the given system of forces, wherever the context makes it relevant. The pattern is, in short, at the same time a thing, which happens in the world, and the rule which tells us how to create that thing, and when we must create it. It is both a process and a thing; both a description of a thing which is alive, and a description of the process which will generate that thing (Alexander 1979: 247).
The antithesis of a pattern is called an anti-pattern: that is, patterns that describe (i) a bad solution to a problem which resulted in a bad situation, or (ii) how to get out of a bad situation and how to proceed from there to a good solution (Appleton 2000; Brown et al. 1998). Patterns and pattern languages provide a broader framework for thinking about questions of paradigmatic means of designing and implementing computational systems. Indeed, in many cases patterns are used in this way to indicate a set of means for the development of software at a macro level. It should also be noted that patterns can be combined with other patterns to produce new patterns at a higher level of complexity; indeed, this is the idea behind Alexander's (1977) notion of a 'pattern language'. Within software design it is quite common to see three levels noted, from most abstract to most concrete: Architectural Patterns, Design Patterns and Implementation Patterns, the last being detailed, programming-language-specific patterns known as idioms (Microsoft 2012).
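By way of illustration, the following is a brief sketch of one well-known design pattern, the Observer, written here in Python; the class and the example usage are invented for this post rather than taken from any of the texts cited above. It shows the three-part structure Alexander describes: a recurring context (one object changes, many others need to know), a system of forces (loose coupling, an open-ended number of listeners), and a repeatable solution.

```python
# A minimal Observer pattern: the Subject keeps a list of callbacks and
# notifies them when something happens, without knowing who they are.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

# usage: two observers react to the same change
feed = Subject()
feed.attach(lambda e: print("logger saw:", e))
feed.attach(lambda e: print("display saw:", e))
feed.notify("state changed")
```

An implementation pattern or idiom, by contrast, would be specific to the language itself – in Python, for instance, the convention of holding a simple list of callables rather than defining a dedicated listener interface.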

Within computer science, the more micro-level problem of automatically recognising patterns within data sets using computation is an important and challenging area of research. The main forms of pattern recognition (we can think of these as patterns to find patterns) used in computation are usually enumerated as template matching, prototype matching, feature analysis, recognition by components, Fourier analysis, and lastly bottom-up and top-down processing. I'll briefly describe each of the six main approaches.

Template Matching: This is where a computational device uses a set of images (or templates) against which it can compare a data set, which might be an image for example (for examples of an image set, see Cole et al. 2004).

Figure: Template Matching (Jahangir 2008)
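A naive version of this can be sketched in a few lines of Python with NumPy: slide the template across the image and keep the position with the highest normalised cross-correlation. The image, template and expected output here are made up for the example; real systems (see Cole et al. 2004) use far more sophisticated and efficient variants.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) of the window most correlated with the template."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue                      # flat window, no pattern to match
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# toy example: hide a small checker pattern in a blank image and find it
img = np.zeros((6, 6))
img[3:5, 2:4] = [[0.0, 1.0], [1.0, 0.0]]
tpl = np.array([[0.0, 1.0], [1.0, 0.0]])
print(match_template(img, tpl))               # ((3, 2), ~1.0)
```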
Prototype Matching: This form of pattern matching uses a set of prototypes, which are understood as an average characteristic of a particular object or form. The key is that there does not need to be a perfect match, merely a high likelihood that the object and prototype are similar (for an example, see Antonina et al. 2003).
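A hypothetical sketch of prototype matching, again in Python: the 'prototype' is simply the average of some known examples, and a new sample is assigned to the nearest prototype, so no perfect match is ever required. The feature vectors below are invented.

```python
import numpy as np

# invented feature vectors for two kinds of object
class_a = np.array([[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]])
class_b = np.array([[4.0, 4.2], [3.9, 4.1], [4.1, 3.8]])

# the prototype is the average characteristic of each class
prototypes = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def classify(sample, prototypes):
    """Return the label of the nearest prototype and the distance to it."""
    distances = {label: np.linalg.norm(sample - p) for label, p in prototypes.items()}
    label = min(distances, key=distances.get)
    return label, distances[label]

print(classify(np.array([1.2, 0.8]), prototypes))   # ('A', small distance)
```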

Feature Analysis: In this approach a variety of techniques are combined, including detection, pattern dissection, feature comparison, and recognition. Essentially the source data is broken into key features or patterns which are then compared with a library of partial objects to be matched against (for examples, see Morgan n.d.).
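In code, a crude version of feature analysis might extract a handful of simple features from the source data and compare them against a small library, as in the hypothetical Python sketch below (the features and the 'library' are invented for illustration).

```python
import numpy as np

def features(img):
    """Dissect an image into crude features: mean brightness and edge energy."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), np.abs(gx).mean(), np.abs(gy).mean()])

# a tiny invented library of known patterns
library = {
    "blank":   features(np.zeros((8, 8))),
    "striped": features(np.tile([0, 1], (8, 4))),
}

def recognise(img, library):
    """Match the sample against the library by comparing feature vectors."""
    scores = {name: np.linalg.norm(features(img) - f) for name, f in library.items()}
    return min(scores, key=scores.get)

print(recognise(np.tile([0, 1], (8, 4)), library))   # 'striped'
```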

Recognition by Components: In this approach objects are understood to be made up of what are called 'geons', or geometric primitives. A sample of data or images is then processed through feature detectors which are programmed to look for curves, edges, etc., or through a geon detector which looks for simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles, and rectangles (see Biederman 1987).
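A toy illustration of the idea (not Biederman's actual model): if objects are stored as sets of geometric primitives, recognition can proceed by comparing the primitives detected in a sample against each stored inventory. The component lists below are invented.

```python
# invented inventories of "geons" for a few objects
library = {
    "mug":      {"cylinder", "handle-arc"},
    "suitcase": {"brick", "handle-arc"},
    "cone":     {"cone"},
}

def recognise(detected, library):
    """Pick the object whose component set is most fully covered by what was detected."""
    overlap = {name: len(detected & parts) / len(parts) for name, parts in library.items()}
    return max(overlap, key=overlap.get)

print(recognise({"cylinder", "handle-arc"}, library))   # 'mug'
```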

Fourier Analysis: This form of pattern matching uses algorithms to decompose something into smaller pieces which can then be selectively analysed. The decomposition process itself is called the Fourier transform. For example, an image might be broken down into a set of twenty squares across the image field, each of which, being smaller, is faster to process. As Moler (2004) argues, 'we all use Fourier analysis every day without even knowing it. Cell phones, disc drives, DVDs, and JPEGs all involve fast finite Fourier transforms'. Fourier transformation is also used to generate a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (the discrete cosine transform) on small square pieces of the digital image. The Fourier components of each square are then rounded to lower arithmetic precision, and weak components are discarded, so that the remaining components can be stored in much less computer memory or storage space. To reconstruct the image, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image; this is why the image can exhibit 'blocky' and other distinctive digital artefacts in the rendered image (see JPEG 2012).
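The principle can be shown with a short NumPy sketch. This uses the fast Fourier transform rather than JPEG's discrete cosine transform, and a made-up random block rather than a real image, but the procedure is the same in outline: transform, discard the weak components, and reconstruct an approximation from what remains.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((8, 8))                        # stand-in for an 8x8 image block

spectrum = np.fft.fft2(block)                     # decompose into frequency components
magnitude = np.abs(spectrum)

# keep only the strongest quarter of the components, zero out the rest
threshold = np.quantile(magnitude, 0.75)
compressed = np.where(magnitude >= threshold, spectrum, 0)

approximation = np.fft.ifft2(compressed).real     # inverse transform to reconstruct
error = np.abs(block - approximation).mean()
kept = int((magnitude >= threshold).sum())
print(f"kept {kept} of 64 components, mean reconstruction error {error:.3f}")
```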

Bottom-up and Top-down Processing: Finally, in the bottom-up method an interpretation emerges from the data; this is called data-driven or bottom-up processing. Here the interpretation of a data set is determined mostly by the information collected, not by prior models or structures being fitted to the data; hence this approach looks for repeated patterns that emerge from the data. The idea is that, starting with no knowledge, the software is able to learn to draw generalisations from particular examples. Alternatively, in an approach where prior knowledge or structures are applied, data is fitted into these models to see if there is a 'fit'. This approach is sometimes called schema-driven or top-down processing. A schema is a pattern formed earlier in a data set or drawn from previous information (Dewey 2011).
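A minimal bottom-up example, sketched in Python: a k-means style loop in which groupings emerge from the data itself, with no prior schema beyond the number of clusters to look for. The data is synthetic, generated for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (20, 2)),    # two unlabelled blobs of points
                  rng.normal(3, 0.3, (20, 2))])

centres = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(10):
    # assign each point to its nearest centre, then move the centres to the means
    labels = np.argmin(np.linalg.norm(data[:, None] - centres, axis=2), axis=1)
    centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.round(centres, 2))   # the two patterns that emerged from the data
```

A top-down version would instead start from a stored schema – say, a prior template of where the groupings ought to lie – and test how well the incoming data fits it.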

What should be apparent from this brief discussion of the principles of abduction and pattern-matching in computer science is their creative possibilities for generating results from data sets. The ability to generate hypotheses on the basis of data, hypotheses which are fallible and probabilistic, allows computational devices to generate forecasts and predictions based on current and past behaviours, data collection, models, and images. It is this principle of abductive reason which makes computational reasoning different from instrumental reason, and particularly from the iron-cage of logical implication or programmatic outcome that instrumental reason suggests. Indeed, Alexander argues that the most useful patterns are generative,
These patterns in our minds are, more or less, mental images of the patterns in the world: they are abstract representations of the very morphological rules which define the patterns in the world. However, in one respect they are very different. The patterns in the world merely exist. But the same patterns in our minds are dynamic. They have force. They are generative. They tell us what to do; they tell us how we shall, or may, generate them; and they tell us too, that under certain circumstances, we must create them. Each pattern is a rule which describes what you have to do to generate the entity which it defines. (Alexander 1979: 181-182)


Alexander, C. (1964) Notes on the Synthesis of Form, Harvard University Press.

Alexander, C., S. Ishikawa, & M. Silverstein (1977) A Pattern Language, Oxford: Oxford University Press.

Alexander, C. (1979) The Timeless Way of Building, Oxford: Oxford University Press.

Antonina, K., Barbro, B., Hannu, V., Jarmo, T. and Ari, V. (2003) Prototype-Matching System for Allocating Conference Papers, accessed 31/03/2012, http://www.hicss.hawaii.edu/HICSS36/HICSSpapers/DTDMI06.pdf

Appleton, B. (2000) Patterns and Software: Essential Concepts and Terminology, accessed 31/03/2012, http://www.cmcrossroads.com/bradapp/docs/patterns-intro.html

Bateson, G. (1979) Mind and Nature: A Necessary Unity, (Advances in Systems Theory, Complexity, and the Human Sciences). Hampton Press, accessed 31/03/2012, http://www.oikos.org/mind&nature.htm

Biederman, I. (1987) Recognition-by-Components: A Theory of Human Image Understanding, Psychological Review, Vol. 94, No. 2, 115-147, accessed 31/03/2012, http://www.cim.mcgill.ca/~siddiqi/COMP-558-2012/rbc.pdf

Brown, W., Malveau, R., McCormick, H. and Mowbray, T. (1998) AntiPatterns, accessed 31/03/2012, http://www.antipatterns.com/

Cole, L., Austin, D. and Cole, L. (2004) Visual Object Recognition using Template Matching, accessed 31/03/2012, http://www.araa.asn.au/acra/acra2004/papers/cole.pdf

Dewey, R. A. (2011) Top-down and Bottom-up Processing http://www.intropsych.com/ch07_cognition/top-down_and_bottom-up_processing.html

Dixon, D. (2012) Analysis Tool or Design Methodology? Is there an epistemological basis for patterns?, in Berry, D. M. (ed.) Understanding Digital Humanities, London: Palgrave.

Eldridge, M. (n.d.) Clarifying the Process of Abduction and Understanding “Inductive” Generalization, accessed 31/03/2012, http://www.philosophy.uncc.edu/mleldrid/SAAP/USC/TP26.html

Jahangir, N. (2008) Genetic Algorithm Driven Template Matching In ActionScript 3.0, accessed 31/03/2012, http://nadimissimple.wordpress.com/2008/12/11/genetic-algorithm-driven-template-matching/

JPEG (2012) JPEG Homepage, accessed 31/03/2012, http://www.jpeg.org/jpeg/index.html

Lea, D. (1997) Christopher Alexander: An Introduction for Object-Oriented Designers, accessed 31/03/2012, http://g.oswego.edu/dl/ca/ca/ca.html

Microsoft (2012) Organizing Patterns, accessed 01/04/2012, http://msdn.microsoft.com/en-us/library/ff647589.aspx

Moler, C. (2004) Numerical Computing with MATLAB, accessed 31/03/2012, http://www.mathworks.se/moler/chapters.html

Morgan, M. (n.d.) Feature Analysis, accessed 31/03/2012, http://www.staff.city.ac.uk/~morgan/FeatureAnalysis.pdf

Peirce, C. S. (1958) Collected Papers of Charles Sanders Peirce, Harvard University Press.

Peirce, C. S. (1988) Pragmatism as the Logic of Abduction, in The Essential Peirce: Selected Philosophical Writings, 1893—1913, Bloomington: Indiana University Press.

Rybczynski, W. (2009) Do You See a Pattern?, Slate, accessed 31/03/2012, http://www.slate.com/articles/arts/architecture/2009/12/do_you_see_a_pattern.html

Schurz, G. (2008) Patterns of Abduction, Synthese, 164 (2): 201-234.

30 March 2012

Code, Foucault and Neoliberal Governmentality

For Foucault, neoliberal governmentality is a particular form of post-welfare state politics in which the state essentially outsources the responsibility for ensuring the 'well-being' of the population. The primary recipient of this responsibility is the subject itself, derived from a strengthened notion of the subject as a rational individual. Indeed, these new subjectivities are expected to 'look after themselves'. This form of governmentality has an extremely diffuse form of rule whereby strategies and imperatives of control are distributed through a variety of media but are implicated in even the most mundane practices of everyday life. As Schecter writes,

Foucault regards the exercise of power and the formalisation of knowledge to be intimately bound up with the constitution of living individuals as subjects of knowledge, that is, as citizens and populations about whom knowledge is systematically constructed... Subjects are not born subjects so much as they become them. In the course of becoming subjects they are classified in innumerable ways which contribute to their social integration, even if they are simultaneously marginalised in many cases (Schecter 2010: 171). 

So, for example, the state promotes an ethic of self-care which is justified in terms of a wider social responsibility and which is celebrated through examples given in specific moments, represented as individual acts of consumption that contribute to a notion of good citizenship. Using recycling bins, caring for one's teeth, stopping smoking, and so forth are all actively invested by the state as matters of both individual and collective care, but most importantly they are the responsibility of the citizen to abide by.

Neoliberal governmentality also gestures towards the subordination of state power to the requirements of the marketplace, the implication being that 'political problems' are re-presented or cast in market terms. Within this framework citizens are promised new levels of freedom, consumerism, customisation, interactivity and control over their life and possessions. In other words, they are promised an unfulfilled expectation as to the extent to which they are able to exert their individual agency.

In order to facilitate this governmental platform, certain infrastructural systems need to be put in place: bureaucratic structures, computational agencies and so forth. For example, it has become increasingly clear that providing information to citizens is not sufficient for controlling and influencing behaviour. Indeed, people's ability to understand and manipulate raw data or information has been found to be profoundly limited in many contexts, with a heavy reliance on habit understood as part of the human condition.

It is here that the notion of compactants (computational actants) allows us to understand the way in which computationality has increasingly become constitutive of the understanding of important categories in late capitalism, like privacy and self-care. Here we could say that we are interested in a transition from the juridicification, through the medicalisation, to the 'computationalisation' of reason. Hence, following Foucault, we are interested in studying the formation of discrete powers rather than power in general. That is, Foucault is interested 'in the processes through which subjects become subjects, the truth becomes truth, and the changing conditions under which this happens, which in the first instance is the discrepancy between the visible and the readable' (Schecter 2010: 173). Or as Foucault himself writes:

What is at stake in all this research about madness, illness, delinquency, and sexuality, as well as everything else I have been talking about today, is to show how the coupling of a series of practices with a truth regime forms an operative knowledge-power system (dispositif) which effectively inscribes in the real something that does not exist, and which subjects the real to a series of criteria stipulating what is true and what is false, whereby these criteria are taken to be legitimate. It is that moment which does not exist as real and which is not generally considered relevant to the legitimacy of a regime of true and false, it is that moment in things that engages me at the moment. It marks the birth of the asymmetrical bi-polarity of politics and economics, that is, of that politics and economics which are neither things that exist nor are errors, illusions or ideologies. It has to do with something which does not exist and which is nonetheless inscribed within the real, and which has great relevance for a truth regime which makes distinctions between truth and falsity (Foucault, The Birth of Bio-Politics, quoted in Schecter 2010: 173).

Indeed, the way in which compactants generate certain notions of truth and falsity is a topic requiring close investigation, both in terms of the surface interface generating a 'visible' truth, and the notion of a computational, or cloud, truth that is delivered from the truth-machines that lie somewhere on the networks of power and knowledge.
Foucault suggests that if there is a 'system' or ensemble of systems, the task is somehow to think systemic functioning outside of the perspective of the subject dominated by or in charge of the so-called system. Critical thinking can deconstruct the visible harmony between casual seeing and instrumental reason... in contrast with monolithic appearances, surfaces are characterised by strata and folds that can inflect power to create new truths, desires and forms of experience (Schecter 2010: 175).
Here we can make the link between sight and power, and of course sight itself is deployed such that the 'visible' is neither transparent nor hidden. Compactants certainly contribute to the deployment of the visible, through the generation of certain forms of geometric and photographic truths manifested in painted screens and surfaces.


Schecter, D. (2010) The Critique of Instrumental Reason from Weber to Habermas, New York: Continuum.

29 March 2012

The Commodity-Mechanism Form of Software/Code

This post is part of a paper I gave at the Unlike Us conference, Amsterdam 2012. The recording of the lecture is available online (Berry 2012).

Software presents a translucent surface relative to the common 'world' and so enables engagement with a 'world'; this we often call its interface (although increasingly the surface is replacing the interface). It is tempting, when trying to understand software/code, to provide analysis at this surface level; however, software also possesses an opaque machinery that mediates engagement and that is not experienced directly, nor through social mediations. Without an attentiveness to the layers of software beneath this surface we are in danger of 'screen essentialism'. In terms of this analytic approach, one of the key aspects is that the surface can remain relatively stable whilst the machinery layer(s) can undergo frenetic and disorienting amounts of change (Berry 2012). This frantic disorientation at the machinery layer is therefore insulated from the user, who is provided with a surface which can be familiar, skeuomorphic (from the Greek, skeuos - vessel or tool, morphe - shape), representational, metonymic, figurative or extremely simplistic and domestic. It is important to note that the surface need not be visual; indeed it may be presented as an application programming interface (API) which hides the underlying machinery behind this relatively benign interface.

The software we use is part of a wider constellation of software ecologies made possible by a plethora of computational devices that facilitate the colonisation of code into the lifeworld. In other words, software enables access to certain forms of mediated engagement with the world; this is achieved via the translucent surface interface, which enables a machinery to be engaged that computationally interoperates with the world. These engagements are enabled by processes we might call compactants (computational actants) which can be understood through a dual surface/machinery structure. Compactants are often constructed in such a way that they can be understood as having a dichotomous modality of data-collection/visualisation, each of which is a specific mode of operation. Again, this may not necessarily be a visual component of the compactant, which may merely re-present data through computational analysis to a visual packager or visualisation device/software system. This modal setting may be accessible to the user, or it may be a hidden function accessible only to certain people/coders/other compactants, etc.

Compactants are designed to passive-aggressively record data. With the notion of compactants I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, social signals, streams of affectivity and so forth. The word compact also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin compactus, closely put together or joined together, also neatly expresses the sense of what web-bugs and related technologies are. The term compactants is also evocative of the notion of 'companion actants' (see Haraway 2003).

Analytically, therefore, software can be said to have two faces:

Commodity: accessible via the interface or surface and providing or procuring a commodity/service/function. Provides a relative stability for the consumption of ends.

Mechanism: accessible via textual source code, which contains the mechanisms and functions ‘hidden’ in the software (means).  This can be thought of as the substructure for the overlay of commodities and consumption. 

The materiality of software requires a form of reading/writing of these depths through attentiveness to code's affordances. By attending to the ontological dimension of software, that is, its structure and construction, we gather an insight into the substructure and machinery of software. Software is used/enjoyed without the encumbrance of, or engagement with, its context due to this commodity form.
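A speculative illustration of this two-faced structure, not drawn from any of the texts above: a stable 'commodity' surface wrapping a 'mechanism' that can be swapped out underneath without the consumer of the interface noticing. The classes and names are invented for the example.

```python
class Surface:
    """The commodity face: a small, stable interface offered for consumption."""
    def __init__(self, machinery):
        self._machinery = machinery              # the hidden means

    def play(self, track):
        return self._machinery.decode(track)

class MachineryV1:
    def decode(self, track):
        return f"playing {track} (software decoder)"

class MachineryV2:                               # frenetic change below the surface
    def decode(self, track):
        return f"playing {track} (hardware-accelerated decoder)"

player = Surface(MachineryV1())
print(player.play("song.mp3"))
player._machinery = MachineryV2()                # machinery replaced; surface unchanged
print(player.play("song.mp3"))
```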

One of the striking things about using this analytical model for thinking about software is that it draws attention to a source of stability in computational society. That is, the commodity layer, the interface, may stay relatively stable vis-à-vis the user, whilst underneath, at the level of the machinery, there can be rapid change in terms of both hardware and software. In the usual case, the user is unlikely to notice much difference in the usability of the device; the interface's constancy allows for a de-freneticness, or at least a looser coupling, between rapid technical change and the user experience of technology. We should expect that when interfaces achieve a certain retinal quality, making them indistinguishable from other representational forms, such as high definition images or photography, then further developments will begin to be made in terms of the skeuomorphic/figurative/metonymic. Indeed, to some extent this is already starting to happen within user interface design with the move to 'simple' or 'obvious' design principles (see Beecher 2010) and the greater use of a surface as a means to access a device.


Beecher, F. (2010) UI Guidelines for Skeuomorphic Multi-Touch Interfaces, accessed 29/03/2012, http://userexperience.evantageconsulting.com/2010/11/ui-guidelines-for-skeuomorphic-multi-touch-interfaces/

Berry, D. M. (2012) Thinking Software: Realtime Streams and Knowledge in the Digital Age, UnlikeUs 2012, accessed 29/03/2012, http://vimeo.com/39256099

Haraway, D. (2003) The Companion Species Manifesto: Dogs, People, and Significant Otherness, Prickly Paradigm Press.

22 March 2012

Dionysians and Apollonians: Letter from Albert Szent-Györgyi to Science (1972)

Winner of the Nobel Prize in Physiology or Medicine (1937), Albert Szent-Györgyi (1893–1986); photograph taken in 1948.
Dear Editor,

Wilhelm Ostwald (1909) divided scientists into the classical and the romantic. One could call them also systematic and intuitive. John R. Platt (personal communication) calls them Apollonian and Dionysian. These classifications reflect extremes of two different attitudes of the mind that can be found equally in art, painting, sculpture, music, or dance. One could probably discover them in other alleys of life. In science the Apollonian tends to develop established lines to perfection, while the Dionysian rather relies on intuition and is more likely to open new, unexpected alleys for research. Nobody knows what 'intuition' really is. My guess is that it is a sort of subconscious reasoning, only the end result of which becomes conscious.

These are not merely academic problems. They have most important corollaries and consequences. The future of mankind depends on the progress of science, and the progress of science depends on the support it can find. Support mostly takes the form of grants, and the present methods of distributing grants unduly favor the Apollonian. Applying for a grant begins with writing a project. The Apollonian clearly sees the future lines of his research and has no difficulty writing a clear project. Not so the Dionysian, who knows only the direction in which he wants to go out into the unknown; he has no idea what he is going to find there and how he is going to find it. Defining the unknown or writing down the subconscious is a contradiction in absurdum. In his work, the Dionysian relies, to a great extent, on accidental observation. His observations are not completely 'accidental', because they involve not merely seeing things but also grasping their possible meaning. A great deal of conscious or subconscious thinking must precede a Dionysian's observations. There is an old saying that a discovery is an accident finding a prepared mind. The Dionysian is often not only unable to tell what he is going to find, he may even be at a loss to tell how he made his discovery.

Being myself Dionysian, writing projects was always an agony for me, as I described not long ago in Perspectives in Biology and Medicine (Szent-Györgyi 1971). I always tried to live up to Leo Szilard's (personal communication) commandment, 'don't lie if you don't have to'. I had to. I filled up pages with words and plans I knew I would not follow. When I go home from my laboratory in the late afternoon, I often do not know what I am going to do the next day. I expect to think that up during the night. How could I tell then, what I would do a year hence? It is only lately that I can see somewhat ahead (which may be a sign of senescence) and write a realistic proposal, but the queer fact is that, while earlier all my fake projects were always accepted, since I can write down honestly what I think I will do my applications have been invariably rejected. This seems quite logical to me; sitting in an easy chair I can cook up any time a project which must seem quite attractive, clear, and logical. But if I go out into nature, into the unknown, to the fringes of knowledge, everything seems mixed up and contradictory, illogical, and incoherent. This is what research does; it smooths out contradiction and makes things simple, logical, and coherent. So when I bring reality into my projects, they become hazy and are rejected. The reviewer, feeling responsible for 'the taxpayer's money', justly hesitates to give money for research, the lines of which are not clear to the applicant himself. A discovery must be, by definition, at variance with existing knowledge. During my lifetime, I made two. Both were rejected off-hand by the popes of the field. Had I predicted these discoveries in my applications, and had these authorities been my judges, it is evident what their decisions would have been. These difficulties could perhaps be solved, to some extent, by taking into account the applicant's early work. Or, if the applicant is young and has had no chance to prove himself, the vouching of an elder researcher acquainted with the applicant's ability may be considered. The problem is a most important one, especially now, as science grapples with one of nature's mysteries, cancer, which may demand entirely new approaches.

Albert Szent-Györgyi (1972)


Szent-Györgyi, A. (1972) Dionysians and Apollonians. Science 176:966.

Ostwald, W. (1909) Grosse Männer. Leipzig: Akademische Verlagsgesellschaft GMBH.
