Computational functionalism is the view that computations of some kind are sufficient to instantiate consciousness.¹ On this view, consciousness depends on what a system does, not on what it is made of.²
Substrate independence is the claim that the same kind of conscious states can occur in systems with different physical substrates – i.e., systems made out of different kinds of stuff.
Substrate flexibility says that conscious states are not tied to carbon-based biological substrates but might be realisable in some other, though not necessarily all, types of material. Computational functionalism is intimately related to substrate flexibility: if computational functionalism holds, then some substrate flexibility follows, since the same computation can be implemented in physically different systems.
Non-computational functionalism includes theories which say that consciousness might depend on patterns of functional organisation (feedback, control, dynamical systems) instead of computations.
Biological naturalism is the claim that consciousness is a property of living systems only (though not necessarily of all of them). For example:
- The standard predictive processing story is that perceptual content is not ‘read out’ from incoming sensory signals, but is instead given by the brain’s ‘best guess’ about the causes of these sensory signals. In Bayesian terms, perceptual contents correspond to approximately optimal posterior beliefs, given by a weighted combination of prior beliefs and sensory signals (likelihoods). Since exact Bayesian inference is generally analytically intractable, the brain implements an approximation: prediction error minimisation. The core claim from the perspective of consciousness is that conscious contents are given by the brain’s ‘best guesses’.
- The autopoietic (‘self-creating’) nature of living systems: such systems, from single cells to entire organisms, constantly regenerate their own components and maintain a boundary between themselves and their surroundings. Unlike other kinds of things, such as rocks or computers, these systems actively (re-)generate their own material basis and maintain their boundaries over time. From the perspective of the free energy principle, this means that they actively resist the dispersion of their internal states – a dispersion otherwise mandated by the second law of thermodynamics. This in turn means they must exist in a state of low entropy, maintaining themselves out of thermodynamic equilibrium with their environment, in order to fend off the inevitable descent into disorder. To stay alive means to continually minimise entropy, because there are many more ways of being mush than there are of being alive. It is sometimes said that staying alive involves resisting the second law. It is better to say that living systems take advantage of the second law, since it is the transformation of low-entropy fuel/food into high-entropy products through metabolism that enables living systems to remain in non-equilibrium (quasi) steady states. Core idea: the predictive processes that underpin all conscious experiences are inextricably tied to the autopoietic nature of living matter.
https://doi.org/10.1017/S0140525X25000032
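The precision-weighted Bayesian combination described in the predictive processing bullet above can be sketched numerically. In the simplest one-dimensional Gaussian case, the exact posterior mean is a precision-weighted average of the prior mean and the sensory signal, and gradient descent on precision-weighted prediction errors converges to the same value. The numbers and variable names below are purely illustrative, not drawn from any particular model in the literature:

```python
# Minimal sketch: exact Gaussian posterior vs. prediction-error minimisation.
mu_prior, pi_prior = 0.0, 1.0   # prior belief: mean and precision (1/variance)
s, pi_sens = 2.0, 4.0           # sensory signal and its precision

# Exact Bayesian posterior mean: precision-weighted average of prior and signal.
posterior = (pi_prior * mu_prior + pi_sens * s) / (pi_prior + pi_sens)

# Approximate inference: gradient descent on precision-weighted squared
# prediction errors, updating the current 'best guess' mu.
mu = mu_prior
for _ in range(1000):
    eps_prior = mu - mu_prior   # deviation from the prior belief
    eps_sens = s - mu           # sensory prediction error
    mu += 0.05 * (pi_sens * eps_sens - pi_prior * eps_prior)

print(posterior)  # 1.6
print(mu)         # converges to ~1.6
```

The point of the sketch is that the brain need not compute the posterior analytically: iteratively reducing precision-weighted prediction errors arrives at the same ‘best guess’.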
- Consciousness: it ‘feels like’ something to be a conscious system – there is a conscious experience happening – whereas it doesn’t feel like anything to be an unconscious system. Intelligence: what a system can do. ↩︎
- One of the best known arguments for substrate flexibility in this context is the neural replacement thought experiment. The basic scenario is that a person’s brain cells are replaced, one by one, with silicon alternatives. Each silicon brain cell exactly replicates the input/output mapping of its biological counterpart, in all situations. Eventually, the person’s brain has only silicon parts, yet its functional organisation is entirely preserved, and so – from the outside – the person would behave exactly as before. If replacing one brain cell doesn’t make a difference to the person’s consciousness – and it seems unlikely that it would – then why should replacing one hundred, or all of them? Would consciousness simply fade away? And, if so, is it plausible that their behaviour remains unchanged while conscious experience completely disappears? The way out of these strange implications seems to be to accept silicon substrate flexibility. ↩︎