Thursday, November 14, 2013

The Construction of Light

"And if a bird can speak, who once was a dinosaur
And a dog can dream; should it be implausible
That a man might supervise
The construction of light?"
                                       – Adrian Belew

Yesterday I gave a keynote lecture at Beyond AI 2013: Artificial Golem Intelligence, in Pilsen, Czech Republic. As I explained at the beginning of the talk, I adopted a broadcast (many topics covered lightly), rather than my usual narrowcast (a single topic covered in depth, with arguments!), strategy.

My apologies for the sound:  there were technical difficulties with the microphones at several points.

Approaching artificial general intelligence (AGI) from the perspective of machine consciousness (MC), I will briefly address as many of the topic areas of the conference as possible within the time allotted:

  • The mind is extended, but Otto's beliefs are not in his notebook; Prosthetic AI vs AGI (Nature of Intelligence)
  • Scepticism about MC and the threat of atrocity (Risks and Ethical Challenges)
  • Theodicy, the paradox of AI, and the Imago Dei; Naturalising the Spiritual (Faith in AGI)
  • Narrative, dreams and MC; Herbert's Destination Void as research programme (Social and Cultural Discourse)
  • How to fix GOFAI; the Mereological Constraint on MC (Invoking Emergence)
  • Artificial creativity as embodied seeking of the subjective edge of chaos (AI and Art)

Streaming video (hosted externally)
Slides (.pdf)

Tuesday, November 12, 2013

Update: All media are again accessible

Apologies to all of you who have been trying to access media at e* -- they are now accessible again.

Thank you for your interest,


Wednesday, July 04, 2012

Making Predictive Coding More Predictive, More Enactive

Ron Chrisley, Sackler Centre for Consciousness Science & Dept of Informatics, University of Sussex, UK
Presented at the 16th annual meeting of the Association for the Scientific Study of Consciousness
Corn Exchange, Brighton, July 3rd, 16:30-18:30: Concurrent Session 2.

Predictive coding (PC) architectures (e.g., Dayan, Hinton, Neal & Zemel, 1995; Rao & Ballard, 1999) have recently been proposed to explain various aspects of consciousness, including those involved in binocular rivalry (Hohwy, Roepstorff & Friston, 2008) and presence ("the subjective sense of reality of the world and of the self within the world") (Seth, Suzuki & Critchley, 2011). It is argued that the potential of PC explanations of consciousness has been obscured by overemphasis on a number of features that are typically held to be essential to the PC approach, but which in fact are not central, and may be detrimental, to PC explanations of consciousness. For example: 1) the components of PC architectures that do the work of explaining consciousness can be de-coupled from hypotheses concerning (e.g., Bayesian) optimality; 2) the structure of the models employed by PC architectures is typically not predictive in any direct sense, being instead a representation of the causes of sensory input (Hohwy, Roepstorff & Friston, 2008); 3) these models are typically disconnected from action, accruing the familiar limitations of disembodied accounts (with Seth, Suzuki & Critchley, 2011 being a notable exception); 4) the winner-take-all promotion of a model to be the content of consciousness can be eliminated, thus enabling PC architectures to accommodate anti-realist or at least more critically realist views of consciousness (Dennett 1991). A more general architecture, Enactive EBA (following Chrisley & Parthemore, 2007), is proposed to preserve the strengths of PC architectures, while avoiding the above limitations and suggesting new hypotheses and experiments to test them.
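For readers unfamiliar with the mechanics being discussed, the core PC idea can be illustrated with a deliberately minimal sketch (this is a generic textbook-style illustration, not Enactive EBA or any of the cited architectures, and all names and values are my own assumptions): a latent "cause" generates a prediction of the sensory input, and inference proceeds by adjusting the cause to minimise the prediction error.

```python
import numpy as np

# Minimal, generic predictive-coding sketch (illustrative only): a latent
# cause mu generates a prediction W @ mu of the sensory input x; inference
# adjusts mu by gradient descent on the squared prediction error.

def predict(mu, W):
    """Generative mapping from latent cause to predicted sensory input."""
    return W @ mu

def infer(x, W, steps=200, lr=0.1):
    """Infer the latent cause of input x by minimising prediction error."""
    mu = np.zeros(W.shape[1])
    error = x - predict(mu, W)
    for _ in range(steps):
        error = x - predict(mu, W)   # prediction error (bottom-up signal)
        mu += lr * (W.T @ error)     # revise the cause to reduce the error
    return mu, error

# A toy generative matrix and a "world" whose true cause we try to recover.
W = np.array([[1., 0.], [0., 1.], [1., 1.]])
true_cause = np.array([1.0, -0.5])
x = predict(true_cause, W)           # noiseless sensory input

mu, err = infer(x, W)
print(np.allclose(mu, true_cause, atol=1e-3))  # prints True
```

Note that points 2) and 3) of the abstract concern exactly what such a sketch leaves out: the inferred cause here merely represents the input's source, and nothing in the loop predicts the sensory consequences of the agent's own actions.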


Sunday, October 24, 2010

Painting an experience? How aesthetics might assist a neuroscience of sensory experience

IULM University, Milan, hosted a European Science Foundation Exploratory Workshop on "Neuroesthetics: When art and the brain collide" on the 24th and 25th of September, 2009. In my invited lecture, I departed significantly from my advertised title, instead using my time to introduce the audience to five strands in my research related to the intersection of neuroscience/cognitive science and art/creativity:

  • Embodied creativity
  • Enactive models of experience
  • Synthetic phenomenology
  • Interactive empiricism
  • Art works/installations


Further links:

Sensory Augmentation, Synthetic Phenomenology and Interactive Empiricism

Helena de Preester using the Enactive Torch

On Thursday the 26th and Friday the 27th of March, 2009, the e-sense project hosted the Key Issues in Sensory Augmentation Workshop at the University of Sussex. I was invited to speak at the workshop; my position statement (included below) serves as a good (if long) summary of my talk.


Sensory Augmentation, Synthetic Phenomenology & Interactive Empiricism: A Position Statement

How can empirical experiments with sensory augmentation devices be used to further philosophical and psychological enquiry into cognition and perception?

The use of sensory augmentation devices can play a crucial role in overcoming conceptual roadblocks in philosophy of mind, especially concerning our understanding of conscious experience and perception. The reciprocal design/use cycle of such devices might facilitate the kind of conceptual advance that is necessary for progress toward a scientific account of consciousness, a kind of advance that is not possible to induce, it is argued, through traditional discursive, rhetorical and argumentative means.

It is proposed that a philosopher's experience of using sensory augmentation devices can play a critical role in the development of their concepts of experience (Chrisley, Froese & Spiers 2008). The role of such experiences is not the same as the role of, say, experimental observation in standard views of empirical science. On the orthodox view, an experiment is designed to test a (propositionally stated) hypothesis. The experiences that constitute the observational component of the experiment relate in a pre-determined, conceptually well-defined way to the hypothesis being tested. This is strikingly different from the role of experience emphasized by interactive empiricism (Chrisley 2010a; Chrisley 2008), in which the experiences transform the conceptual repertoire of the philosopher, rather than merely providing evidence for or against an empirical, non-philosophical proposition composed of previously possessed concepts.

A means of evaluation is needed to test the effectiveness of the device with respect to the goals of interactive empiricism and conceptual change. Experimental philosophy (Nichols 2004) looks at the way in which subjects' philosophical views (usually conceived as something like degree of belief in a proposition) change as various contingencies related to the proposition change (e.g., how does the way one describes an ethical dilemma change subjects' morality judgements of the various actions in that situation? cf., e.g., Knobe 2005). One could apply this technique directly, by empirically investigating how use of sensory augmentation devices affects subjects' degree of belief in propositions concerning the nature of perceptual experience. However, it would be more in keeping with the insights of interactive empiricism if such experiments measured behaviour other than verbal assent to or dissent from propositions, such as reaction times and errors in classification behaviour. This might allow one to detect changes in subjects' conceptions of the domain that are not reportable or detectable by more propositional, self-reflective means.
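The kind of non-verbal measure suggested here can be sketched concretely (a hypothetical illustration of the methodology, not an actual experiment; the trial data below are invented placeholders): rather than recording assent to propositions, one summarises each subject's classification trials before and after device use and compares speed and accuracy.

```python
# Hypothetical sketch of a non-verbal measure of conceptual change:
# summarise classification trials (reaction time in ms, correctness)
# before and after use of a sensory augmentation device.

def summarise(trials):
    """trials: list of (reaction_time_ms, correct) pairs.
    Returns (mean reaction time, error rate)."""
    n = len(trials)
    mean_rt = sum(rt for rt, _ in trials) / n
    error_rate = sum(0 if ok else 1 for _, ok in trials) / n
    return mean_rt, error_rate

# Invented placeholder data for one subject:
before = [(540, True), (610, False), (580, True), (630, False)]
after  = [(470, True), (500, True), (515, True), (560, False)]

rt_b, err_b = summarise(before)
rt_a, err_a = summarise(after)
print(rt_a < rt_b and err_a < err_b)  # prints True: faster and more accurate
```

A real study would of course need many subjects, counterbalancing, and statistical tests; the point of the sketch is only that the dependent measures are behavioural rather than propositional.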

Are there rigorous techniques that can characterise the subjective experience of using sensory augmentation technology?

Synthetic phenomenology is 1) any attempt to characterize the phenomenal states possessed by, or modelled by, an artefact (such as a robot); or 2) any attempt to use an artefact to help specify phenomenal states (independently of whether such states are possessed by a naturally conscious being or an artefact) (Chrisley 2009; Chrisley 2010b; Chrisley 2008). Although "that" clauses, such as "Bob believes that the dog is running", work for specifying the content of linguistically and conceptually structured mental states (such as those involved in explicit reasoning, logical thought, etc.), there is reason to believe that some aspects of mentality (e.g., some aspects of visual experience) have content that is not conceptually structured. Insofar as language carries only conceptual content, "that" clauses will not be able to specify the non-conceptual content of experience. An alternative means, such as synthetic phenomenology, is needed.

Earlier (Chrisley 1995), I had suggested that we might use the states of a robotic model of consciousness to act as specifications of the contents of the modelled experiences. This idea has been developed for the case of specifying the non-conceptual content of visual experiences in the SEER-3 project (Chrisley & Parthemore 2007a; Chrisley & Parthemore 2007b). Specifications using SEER-3 rely on a discriminative theory of visual experience based on the notion of enactive expectations (expectations the robot has to receive a particular input were it to move in a particular way). Depictions of the changing expectational state of the robot can be constructed in real time, depictions that require the viewer to themselves deploy sensorimotor skills of the very kind that the theory takes to be essential to individuating the specified content. Thus, the viewer comes to know the discriminating characteristics of the content in an intuitive way (in contrast to, say, reading a list of formal statements each referring to one of the millions of expectations the robotic system has).

Just as SEER-3 models, and permits the specification of, experiences in a modality we naturally possess (vision), so might other robotic systems, equipped with sensors that do not correspond to anything in the natural human sensory repertoire, model and permit the specification of other experiential states. As with the case of visual experience, specification cannot consist in a mere recording or snapshot of the sensor state at any moment, nor even in a sequence of such snapshots. Rather, the specification must be dynamically generated in response to the specification consumer’s probing of the environment (virtual or real), with the sensor values being altered in a way that compensates for both the subjectivity of the experience being specified, and that of the recipient herself.
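The notion of enactive expectations doing the work in the preceding paragraphs can be given a toy rendering (this is emphatically not the SEER-3 implementation; the state, actions, and sensory values below are invented for illustration): a perceptual state is individuated by the mapping from possible movements to expected inputs, and two states differ in content just when some movement would yield different expected input.

```python
# Toy illustration of "enactive expectations" (not the SEER-3 system):
# a state is a mapping from possible movements to the sensory input the
# agent expects to receive were it to perform that movement.

from dataclasses import dataclass

@dataclass(frozen=True)
class ExpectationalState:
    # tuple of (action, expected_input) pairs
    expectations: tuple

    def discriminates(self, other):
        """Return the actions whose expected outcomes differ between the two
        states -- the discriminating characteristics of the content."""
        mine = dict(self.expectations)
        theirs = dict(other.expectations)
        return [a for a in mine if mine[a] != theirs.get(a)]

# Invented example: an agent with a light source to its left vs. its right.
facing_left  = ExpectationalState((("turn_left", "bright"), ("turn_right", "dark")))
facing_right = ExpectationalState((("turn_left", "dark"), ("turn_right", "bright")))

print(facing_left.discriminates(facing_right))  # both movements discriminate
```

The point of the real-time depictions described above is precisely that a viewer grasps such discriminating structure by exercising the relevant skills, rather than by reading off a table like this one.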


Friday, September 17, 2010

Concepts and Proto-Concepts in Cognitive Science (part 2)

As explained in the previous post, in August of 2010 I gave two lectures as part of the annual Summer School of the Swedish Graduate School in Cognitive Science (SweCog). The previous post contains the first of these lectures; this is part two. Near the end I showed a movie as a kind of dynamical specification of the non-conceptual content of visual experience modelled by the SEER-3 robot. This movie is not included in the PodSlide file; those interested in seeing it should download the supplementary file: "Non-conceptual content specification demo".


Thursday, September 16, 2010

Concepts and Proto-Concepts in Cognitive Science (part 1)

In August of 2010 I gave two lectures as part of the annual Summer School of the Swedish Graduate School in Cognitive Science (SweCog). I was invited to speak on the topic "Cognition (or Consciousness) and Non-Conceptual Content", so I devoted the first lecture to getting clear on the nature of concepts. This allowed me to contrast conceptual content (which is, briefly: content that is articulable, recombinable, rational and deployable) with non-conceptual content, which was detailed in the second lecture (to follow).