Why I reject computationalism
Since taking courses in cognitive psychology a decade ago, I have heard repeatedly from cognitivists that the mind does not have “direct” access to reality. Instead, they claim, there are only re-presentations of reality (which allegedly come before presentations), and some sort of symbol system manipulates them post hoc. Ever since, I’ve been trying to “find” these re-presentations. And after searching with the help of half a dozen psychotherapists over the years, I’ve concluded they cannot be found because they do not exist in the first place.
Let me define some terms.
Computationalism remains the most fashionable view of what the mind is and how it works (Margolis et al., 2012). It is the internalist thesis that the mind is a “machine-like” entity that performs rule-based operations (“manipulations”) on symbolic objects such as phonemes, morphemes, images, numbers, and other entities, and then combines (“computes”) them into another entity called a “representation”, which the person then uses as a perceptual lens onto yet another separate realm called “reality”.
Before we get distracted by the infinite regress and implied Cartesianism, I’ll clarify the premises in their own words (in Margolis et al., 2012):
“Reality” is assumed to be a purely material (“physical”), mind-independent realm.
Mind is only the result of a physical “symbol system” located inside of the brain.
This physical symbol system is the creator and bearer of re-presentations of reality (computationalists thus reject all forms of realism).
These representations are only the result of the manipulation of “symbols” (such as letters, numbers, imagery, and so on) and no other entities (in Egan, 2012).
Thus, consciousness is reduced to a word processor like MS Word.
For example, suppose a computationalist wants to explain how a hypothetical Jane would experience a car accident. On his view, at the moment of impact, Jane’s physical symbol system (and no other entity anywhere in her body) performs a series of symbol manipulations to create a “re-presentation” of the vehicle colliding with hers, and only after this re-presentation has become salient (more symbols!?!) does she experience what it is like to be in a car accident.
We could use another example: since they claim this is how all conscious agents work, the account must apply to every possible experience, right?
Suppose someone experiences sexual assault at a young age, before they have any formal concepts or prior experiences of what boundaries are – a common problem described by Dorahy and others (2022). According to the computationalist, such victims have no epistemic access to their own bodies or to reality whatsoever, not even implicitly. Instead there are only re-presentations of sexual assault involving symbol crunching, but no direct, non-verbal, bodily experiences (see also Lowen, 1967).
Taken together, sensorimotor phenomena (i.e., “feelings”), along with the HPA axis and sympathetic nervous system, are secondary or epiphenomenal. That is, they’re collectively deemed a-causal “side effects” occurring after the discrete computation of a (virtual) re-presentation of a vehicular impact or sexual assault. So computationalism bans all notions of bodily experience in the situated “present”, instead privileging symbolic manipulations from some other spatiotemporal dimension.
Let’s analyze the first premise:
“Reality is assumed to be a purely material (‘physical’), mind-independent realm”.
I find it strange that they concluded there is 1) only a material world but that 2) it’s mind-independent. To make any claim about anything at all requires a first-person perspective, not just an epistemology. Otherwise, what other agent is making such conclusions? To be clear, I’ll grant it’s epistemically coherent to say “There’s a material world out there, because this mind, which is also material, is interacting with it in material ways (e.g., by smell, touch, and so on)”. Sure, that’s consistent.
But to then say “Actually, there’s no mind directly interacting with it at all; there’s instead a symbol system that mediates it, so it’s 100% mind-independent” is a prima facie contradiction. Because reaching the first conclusion – that there is a purely material reality – necessitates a mind-dependent, first-person (embodied-physical) perspective. Again, the first premise is fine. What is problematic is then to deny the empirical experience of the physical embodied observer (e.g., “the mind”) at step two, post hoc, by inventing a medium out of thin air called a representation that, per the computationalists’ own words, may not even be reliable!
I’ll repeat myself a third time, because I continue to encounter people who cannot grasp the absurdity:
1) They say the mind is material, and that it interacts with a material realm by material means. This is perfectly fine.
2) But then they say the same mind – the one used to make the metaphysical claim that materialism is a fact – might all of a sudden be an unreliable system that cannot be trusted, because it uses re-presentations post hoc.
And yet they call themselves “realists”.
Some might object that Bernardo Kastrup has raised similar objections toward materialism and thereby argue that I am not “properly” critiquing computationalism but materialism instead. However, because computationalism is a materialist paradigm, this seems like fair game to me. In fact, Kastrup routinely embraces representationalism, yet at other times relies upon direct, first-person, non-verbal, psychedelic and other altered states of consciousness to defend his version of metaphysical idealism (e.g., Kastrup, 2014). Is that not self-defeating?
To that end, let’s turn to the second and third premise:
“Mind is only the result of a physical ‘symbol system’ located inside of the brain.”
“This physical symbol system is the creator and bearer of re-presentations of ‘reality’.”
First, it seems to presuppose that it is impossible to engage in rock climbing without there first being “symbol crunching”. Indeed, I keep pointing this out to a friend when we try more challenging routes: “So, which complex equations did you calculate, a priori, before intending to swing your hip horizontally and simultaneously lift your opposing foot onto that blind ledge you’ve now missed thrice?”. Without hesitation, his response always involves so-called ‘hypopriors’ under the predictive processing paradigm (PP). In his defense, PP is certainly the “sexiest” version of computationalism to date. I will freely admit it’s a wonderful theory from a pragmatic, rather than “realist”, lens. For it embraces bottom-up (parts-to-wholes) and top-down (wholes-to-parts) ways of manipulating information, and so is to some extent compatible with downward causation in the emergentist literature.
On the other hand, most PP theorists implicitly believe mental states are driven solely by Bayesian logic. This part I find problematic: although Bayes’ theorem itself dates to the eighteenth century, Bayesian inference as a formal framework for cognition is a product of the twentieth-century, reducto-physico “mechanistic age”, with no historical basis beyond it. So I find it an extremely bold claim to rely upon a mathematics that rose to prominence only recently and in no other era of history, and then take the second step of declaring it the only and eternal way of knowing anything whatsoever for the rest of time.
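For readers unfamiliar with the formalism being debated, the core move PP theorists make is to model perception as Bayesian belief updating: a prior over hidden causes is revised by incoming sensory evidence, and the mismatch (“prediction error”) shifts the belief. Here is a minimal sketch of that updating step – the hypothesis space and numbers are purely illustrative, not any PP theorist’s actual model:

```python
# Minimal sketch of the Bayesian updating that predictive processing (PP)
# treats as the engine of perception. Hypotheses and probabilities are
# illustrative only.

def bayes_update(prior, likelihood):
    """Return the posterior over hypotheses given P(h) and P(evidence | h)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior belief about what the climber's foot is about to land on.
prior = {"ledge": 0.7, "air": 0.3}

# Likelihood of the incoming proprioceptive signal under each hypothesis.
likelihood = {"ledge": 0.2, "air": 0.9}

posterior = bayes_update(prior, likelihood)
# The sensory evidence ("prediction error") shifts the belief toward "air".
```

The philosophical question raised above is untouched by the mechanics: the update rule works as arithmetic, but nothing in it explains why this particular calculus should be the one and only way minds know anything.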
Now let’s return to my main issue with computationalism. Because of its exclusive reliance on re-presentations and outright dismissal of presentations (which, by definition, temporally come first), I find it harmful to mental well-being insofar as it forces us into the “hyper-analytic” left hemisphere and thereby de-synchronizes the right hemisphere. For the latter is constitutive of embodied, “in the present” states of mind and all varieties of Gestalt perception (e.g., wholes to wholes, wholes to parts, parts to wholes).
As Iain McGilchrist elaborates:
“…what is at stake here is nothing other than the nature of time itself. Analysis is a (1) fragmentation into (2) static instants of a (3) ‘re’-presentation, by definition ‘after’ the fact and (4) in an abstract realm; it has all the hallmarks of the left hemisphere about it. By contrast our intuitive understanding takes as an (1) indivisible whole (2) the nature of motion as it is (3) present within (4) embodied experience, and has all the hallmarks of the right hemisphere about it” (McGilchrist, 2021, p. 646, original italics).
His point is that this style of thinking – re-presenting sensory phenomena, whether real or imaginary – is harmful in the long term, as it induces a habitual bodily disconnection. But I disagree with him that there are people in the world with pre-existing “broken hemispheres” and that this is primarily a genetic problem rather than a societal or learned one (see pp. 305-70). Rather, if we take his thesis seriously, the entire field of cognitive science is also morally guilty of artificially creating the intellectual conditions – which behave as emergent causal constraints, “top-down” – that cause people to feel detached from reality (e.g., the living body and the present). Among others, Matthew Ratcliffe has called this an “epistemic pathology” and a “poverty of the mechanistic world” (2008, pp. 289-92).

That is, my claim is that the longer one spends in “mechanistic” cognitive science, the more likely one is to become mentally ill – in turn fueling even more suffering onto the rest of the world via re-projections of re-presentations. So perhaps the “psy”-fields do not actually deserve the title of “psy”-anything, because the Greek root psyche literally translates to soul or self. But the cognitive sciences no longer study selves either. Instead, they study disembodied symbol systems and other mathematical abstractions.
Thomas Fuchs (2021) reaches a similar conclusion, describing how the last century of developments in film, television, and other technological media has caused the West to again become “disembodied”. Rather than experiencing the body itself as a medium through which one entrains and is entrained by others across space and time – the very ingredients of neural plasticity – people rely upon static and inanimate objects and other machines (called iPhones and laptops) that inevitably remove us from the temporal present: what Georg Northoff elsewhere calls synchrony.
What does it mean for a computational mind to be “machine-like”, anyways? The most common answers the computationalists themselves give (in Margolis et al., 2012) are that such minds are:
physical or material
purely deterministic
a closed system
an input-output scheme
operating purely under material and efficient causality, without formal or final causes
I’ve already covered elsewhere why the last premise must be wrong. And the input-output scheme is already debunked by the fact that creativity, flow, and other nonlinear altered states universally exist.
On the other hand, I have no immediate reservations about the materialist claim. I’ve repeatedly found that publicly disclosing anything but materialism immediately results in negative attributions or a loss of social status: “That guy’s a nutcase!”. To me, the material realm is immanently real – what Aristotelians have called moderate realism. I guess I’m willing to “play along”, so to speak. But it does not follow that matter is primary and consciousness secondary (see Koons, 2018).
Alternatively, we could adopt the “filter theory of consciousness” that Kastrup (2014) and other idealists promote: the brain does not so much “produce” consciousness as “constrain” it – in our context, via an alleged symbol system. However, no one in cognitive science has ever found where this mysterious symbol system is located, even though they insist it is physically reducible. Instead, it’s yet another “hidden variable” theory that cannot be falsified (see Varela et al., 2016, pp. 37-57).
But let’s pretend there’s a symbol system anyways; perhaps McGilchrist is correct to say it’s “somewhere” in the left hemisphere. Computationalists frequently deploy the word “hardware” to refer to brain matter and “software” to refer to mind. So the next question is: whence did this “software” arise, and from whose first-person perspective? Do the computationalists have their own variety of autopoiesis? Or a first cause or prime mover? No?
They have also insisted the system is purely deterministic and closed. Yet every biologist will acknowledge that humans are not closed systems at all: they exchange energy and matter with the world in non-equilibrium steady-states. That is, “homeostasis” has no business anywhere in cognitive science; for something to be quantitatively homeo-static means it is literally a heap (and therefore dead). Rather, the closed-system argument appears to be a ruse to lure naïve readers into believing they’re deterministic computers – as it were, “cogs in the machine”.
Let us now “zoom out” and ask ourselves a common-sense question. Rather than judging a paradigm by what it purports, by its empirical evidence, or by how aesthetically pleasing it might be, why not analyze it by the societal effects it has – by what it does outside the ivory towers, to the commoner?
For starters, we can re-frame this as: how do people interact with computers, or any other sort of machine, daily – that is, by their volitional actions, not their words?
The only correct answer is that computers (and all other machines) are valued only to the extent they can produce work – that is, to be used as tools. Afterwards they are thrown in the trash. Hence the dilemma the computationalists face long term: is it a “good idea”, morally speaking, to indoctrinate society into believing one’s value is implicitly reducible to how exploitable one is? In this sense, the computationalist inevitably conceptualizes humans not as humans but as tools for material gain – much like how Reddit publicly disclosed it will sell all prior user data so it can be manipulated with future A.I. and then re-sold for more surveillance capitalism.
Even my climbing friend (the computational apologist) agrees with me on this point. He then justifies it by appealing to grandiose varieties of transhumanism and the simulation theories promoted by David Chalmers, wherein humans are no longer synchronized with themselves, with other bodies, or even with Nature (to use Spinoza’s word). Rather, they’re “hooked up” to some A.I.-based “collective consciousness” owned by mega-corporations.
The last time I nearly stormed out of his apartment, I asked “But why would you join them? Have I not shown you what happens to overly mechanistic minds?” and his response was to the effect of “If you can’t beat them, join them”.
Is this not what Sanneke de Haan means by “disorders of existential sense-making”?
Dorahy, M. J., Gold, S. N., & O’Neil, J. A. (2022). Dissociation and the dissociative disorders: Past, present, future (2nd ed.). New York: Routledge.
Egan, F. (2012). Representationalism. In E. Margolis, R. Samuels, & S. P. Stich (Eds.), The Oxford Handbook of Philosophy of Cognitive Science (pp. 251-72). Oxford University Press.
Fuchs, T. (2021). In defense of the human being. Oxford University Press.
Kastrup, B. (2014). Why materialism is baloney: How true skeptics know there is no death and fathom answers to life, the universe, and everything. Iff Books.
Koons, R. (2018). Against emergent individualism. In J. P. Moreland, J. Loose, & A. Menuge (Eds.), The Blackwell Companion to Substance Dualism. Wiley Blackwell.
Lowen, A. (1967). The betrayal of the body. Bioenergetics Press.
Margolis, E., Samuels, R., & Stich, S. P. (Eds.). (2012). The Oxford Handbook of Philosophy of Cognitive Science. Oxford University Press.
McGilchrist, I. (2021). The Matter with Things: Our brains, our delusions, and the unmaking of the world (Vol. 1). Perspectiva Press.
Ratcliffe, M. (2008). Feelings of being. Oxford University Press.
Varela, F. J., Thompson, E., & Rosch, E. (2016). The embodied mind: Cognitive science and human experience (rev. ed.). MIT Press.