Mind or machine? Who cares?
Are machines becoming more like us? Will they take over soon? Before we get too worried, let’s look at what separates minds from machines.
I didn’t plan to write much about artificial intelligence, and although I was aware of the role of AI research as part of cognitive science, I didn’t expect it to become so… relevant (hah!) so quickly. It’s almost uncanny how much the current debate around artificial intelligence is intertwined with the meaning crisis and the issues at the intersection of design, cognitive science, and technology, which I try to highlight in this newsletter.
Reasoning machines
Is reasoning just a form of computation? Long before we had computers, back in the 17th century, this question was debated between René Descartes and Thomas Hobbes. Surprisingly, even though Descartes basically paved the way for the scientific revolution with his thinking, he had a pretty clear stance on computation1 too. While his trick of viewing everything as a mechanical machine to be taken apart for analysis defined modern science for the following three centuries, and cast the universe as meaningless matter following universal rules, he did not believe that reasoning is just computation. He believed that humans do more than just abstract symbol manipulation — he believed we care.
Science is teaching us that the world is purposeless, matter is meaningless, there's no normative standard or structure in matter. It's just actually how it is. And how it actually is, is valueless. [Descartes to Hobbes:] "Hobbes, matter lacks meaning, purpose, normativity. It’s inert. How could you possibly get all of those things out of matter?"
How could you? If you are a reasoner, you care about the truth. And yet truth depends on meaning, purpose, at least the pursuit of truth, and normative standards of how things ought to be. And none of those are in matter.
Hobbes responds and says, “You know what it's like? What I can have is: I can have my abacus and it's automated, and I have little pieces of paper on them, and the pieces of paper are manipulated” — much like letters on your computer screen. “And if they're manipulated in the right way, I get a meaningful sentence: The cat is on the mat”.
Then Descartes says, “Hobbes, you're being an idiot, because you're making a fundamental mistake here. First of all, you're English, I'm French. I don't have [the word "cat"], I have [the word "chat"]. Physically, these are two very different things. Yet we're both thinking about the same, miaow miaow, creature.”
There's no intrinsic meaning in those material marks. If waves on a shore happen to scratch the pebbles so that [the word "Hi!"] appears on the beach, would you think the ocean is talking to you? That'd be ridiculous. It's just random grooves cut in the sand by the water. It has no intrinsic meaning. These things only have meaning because they are associated with ideas in your mind. And those ideas actually possess meaning.
Do you see what Descartes is saying? He's saying, “Look, you have a view of matter that makes the rationality that you are holding out to be so central actually deeply problematic.”
This is what we need to pay attention to when we invoke rationality as a standard. Of course, we should invoke rationality as a standard. But, first of all, two things we should note:
The idea that rationality is just the logical manipulation of propositions is something we should question, because […] that's not historically accurate. That's a particular view that we see from Descartes.
Secondly, Descartes himself rejects that because he realizes that rationality is caring about the truth on purpose according to normative standards and values. None of that machinery can actually be found in the scientific model of matter.
What is actually deeply mysterious in our culture right now, although it is invoked religiously — and I mean that — is exactly the notion of rationality itself.
This is not me advocating irrationality. Not at all! I am against the advocation of [rationality] as if it is a philosophically unproblematic phenomenon. That is irresponsible and seriously misleading.
To survive and thrive, we need to adapt to reality, so we are constantly looking for patterns that tell us more about how the world works. Rationality requires that we care about the truth on purpose according to normative standards and values.
Can cold, algorithmic calculation, mere manipulation of symbols, provide that?
Why do we care?
Our cognition relies on a bio-economy that allocates our cognitive resources to avoid combinatorial explosion. At the same time, our cognition is of utmost importance to our survival. This is why we have to care about how our resources are allocated, how we process information, and what information we process.
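To make the scale of that combinatorial explosion concrete, here is a small back-of-the-envelope sketch in Python (my own illustration, not something from Vervaeke or Montague): an agent that tried to exhaustively check every subset of its cues for relevance would be swamped almost immediately.

```python
# Illustration only: with n binary cues, an exhaustive reasoner would have to
# consider 2**n possible subsets when deciding what is relevant.
for n in (10, 50, 300):
    print(f"{n} cues -> {2**n:.2e} possible subsets")

# 300 cues already yield ~2e90 subsets, vastly more than the ~1e80 atoms
# estimated in the observable universe, so checking everything is hopeless;
# relevance has to be realized, not exhaustively computed.
```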
Relevance Realization is not cold calculation. It is always about how your body is making risky, affect-laden choices of what to do with its precious, but limited cognitive and metabolic and temporal resources.
Relevance Realization is deeply, always […] an aspect of caring. That's what Read Montague argues — the neuroscientist — in his book Your Brain Is Almost Perfect. That what makes us fundamentally different from computers, because we are in the finitary predicament, is we are caring about our information processing and caring about the information processed therein.
This is always affect. It's things are salient, they're catching your attention. They're arousing, they're changing your level of arousal — remember how arousal is an ongoing, evolving part of this? And they are constantly creating affect, motivation, moving, emotion, moving you towards action. You have to hear how at the guts of consciousness, intelligence, there is also caring. That's very important.
That's very important because that brings back, I think, a central notion — and I know many of you are wondering why I haven't spoken about him yet, but I'm going to speak about him later — from Heidegger. That at the core of our being in the world is a foundational kind of caring. And this connection I'm making, this is not farfetched.
Look at somebody deeply influenced by Heidegger, who was central to the third generation, or 4E cognitive science: That's the work of Dreyfus and others. Dreyfus has had a lot of important history in reminding us that our knowing is not just propositional knowing, it's also procedural and, ultimately, I think, perspectival and participatory. He doesn't quite use that language, but he points towards it. He talks a lot about optimal gripping and, importantly, if you take a look at his work Being-in-the-World on Heidegger, when he's talking about things like caring, he's invoking, in central passages, the notion of relevance.
And when he talked about What Computers Can't Do and later on What Computers Still Can't Do, what they're basically lacking is this Heideggerian machinery of caring, which he explicates in Being-in-the-World in terms of the ability to find things relevant.
This of course points again towards Heidegger's notion of Dasein. That our being in the world — to use my language — is inherently transjective, because all of this machinery is inherently transjective. And it is something that we do not make. We and our intelligible world co-emerge from it. We participate in it.
We find things relevant not just on an abstract, propositional level, but on a deep, subconscious, participatory level: relevance realization is what enables all of our higher-order cognitive functions. We care, because caring is fundamentally built into our deepest levels of perception and cognition.
Reasoning requires emotion
The deep divide between emotion and reason, a divide that I think is enshrined and ossified in our cognitive cultural grammar, and the ongoing battle between the empiricists and the romantics, between John Locke and Rousseau, is being addressed. […]
We can think of the work of Damasio in Descartes’ Error. […] Damasio is basically showing that people without emotion, even though all of the calculative machinery is operating well, are disconnected from their emotions and thereby incapacitated as cognitive agents.
Why? Because without emotion, without the caring that is integral to relevance realization… Remember what Read said? That we're different from computers, that we have to care about our information. And why do we have to care about information? Because we ultimately have to take care of ourselves, because of the kind of beings we are. When you don't have that [caring], you face combinatorial explosion.
There's a deep interconnection between being embodied, being a relevance realizer, and having emotions. Emotions are the way in which relevance realization is brought up into the level of your salience landscaping. And what emotions do is they craft, they shape and sculpt the salience landscaping, such that an agent-arena relationship becomes obvious and apparent to you. When you are angry, you assume a particular role, you assign a bunch of identities, and it's obvious to you what you should be doing.
I would make this prediction: That as we move towards making — and we're going to come back to this issue about artificial general intelligence — as we move towards making artificial general intelligence, we are already having to give these machines something deeply analogous to attention2, and we're going to, I would predict, have to give them something analogous to emotion.
I think it's better to talk about this. I would put the two together. That within religio, you always have caring/coping. And that's the core of your cognitive agency. The emotion also carries up to the relationship — and of course, this goes back to agape — that emotion is how we coordinate the attachment. I don't mean it in the Buddhist sense, I mean in the psychological sense — the attachment relationships between individuals — such that we create persons, who are capable of dwelling within and coordinating their efforts within distributed cognition. We create persons within communities of persons that shape themselves, their community, and their world to fit together in an ongoing agent-arena fashion.
Within the relevance realization framework, the important role of emotion to sculpt your salience landscape becomes obvious. The objectivist separation of reason and emotion does not make sense, because they are deeply interconnected at the core of our cognition.
Yet, when we design and build things, whether complex artificial intelligence or other software or products or buildings, we usually still focus exclusively on reason — the objective that can be reasoned about logically — and ignore emotion — the subjective that has no place in an objectivist, mechanical worldview.
Is it possible that this affects how we perceive the things we design this way?
Infected by banality
Christopher Alexander in The Nature of Order, book IV [highlights mine]:
Of all the periods in human history, ours is perhaps the period in which architecture has been most barren spiritually, most infected by banality. I myself have become aware only slowly during the last thirty years, of the way that this artistic barrenness follows directly from our contemporary mechanism-inspired cosmology. But I have finally come to believe that it is just the prevailing views we hold about the mechanical nature of the universe which have led directly to a situation in which great buildings — even buildings of true humility — almost cannot be made.
I say that even humble buildings cannot be made, because the infection which comes from our mechanistic cosmology, is mainly one of arbitrariness — and the arbitrariness breeds pretension. In the presence of pretentiousness, true humility is almost impossible. A truly humble cottage even, seems beyond the reach of most builders today.
Alexander criticizes buildings here, but his criticism applies to everything we build and design. Many products, and particularly software, seem to be well described by words like banal, arbitrary, and pretentious. Few designers and developers really care, but those who do seem to know something the others don’t.
Jony Ive on care and carelessness
Care is a tough word in some ways to understand. I think it's easier to understand carelessness, which I see as it being a disregard for people.
Carelessness to me is just seeing people as a potential revenue stream, not the reason to work moderately hard to really express your love and appreciation for the rest of the species. So for us in our practice of design I think care is very often felt and not necessarily seen. […]
Steve talks about the carpenter, the cabinet maker that would finish the back of the drawer. And it's that you're bothered beyond whether something is actually publicly seen; you do it not because there's an economic interest, you do it because it's the right moral decision.
Particularly as a designer I think it's very often in the very small quiet things, like worrying about how you package a cable or…
[Interviewer: “You clearly worry about that a lot.”]
Yeah, I worry about that ever such a lot.
And Steve worried about that a lot as well. I think it's that preoccupation… You know when you're sat there on a Sunday afternoon worrying about the power cable that's packaged as a ziggy-zag thing and you're going to take that little wire tie off. When you're sat there on a Sunday afternoon worrying that this isn't really very good, the only reason you're very aware, the reason you are there, is because our species deserves better. It deserves some thought. And it's a lovely way of… I don't know… I think you feel connected.
I think this is a worthy perspective to keep in mind as we create and deploy (non-sentient and non-conscious) artificial intelligence that is capable of mimicking our creative output in the form of text and images (and soon music, video, 3D assets, and more).
It is safe to say that at least the current generation of artificial intelligence doesn’t care, because it has not been designed in a way that would even make that possible. And just giving it even more data to digest and learn from is not going to change that.
That does not mean this kind of AI is not useful. It is.
And it also does not mean that we won’t eventually develop AI that genuinely cares. We will. But it’s not here yet.
What worries me much more today is that although we are “designed” to care, we seem to have given up doing so.
I don’t think the problem is that machines are about to become as powerful as we are, or even more powerful. I think the real problem is that we are becoming more and more like machines, who don’t care.
We are forgetting our “love and appreciation for the rest of the species”. And if we lose ours, why would an artificial general intelligence of our design ever have it?
Mirror of the Self is a weekly newsletter series investigating the connection between creators and their creations, trying to understand the process of crafting beautiful objects, products, and art.
Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people that know, use, and love them.
If you are new to this series, start here: 01 • A secular definition of sacredness.
Overview and synopsis of articles 01-13: Previously… — A Recap.
Overview and synopsis of articles 14-26: Previously… — recap #2.
My recent presentation Finding Meaning in The Nature of Order.
To be clear, they of course didn’t call it “computation” back then, nor did they use the term “artificial intelligence”; but what Hobbes talked about was basically artificial intelligence.
One of the defining scientific papers at the core of the transformer technology that makes large language models like GPT possible is the paper titled “Attention Is All You Need”.
Very interesting read!
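Since attention keeps coming up, here is a minimal sketch of the scaled dot-product attention operation at the heart of that paper, written in Python with NumPy (my own illustration, not code from the paper): each token's query is compared against every token's key, and the resulting weights determine how much of each value flows into the output.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # how strongly each query attends to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per query sum to 1
    return weights @ V                              # each output is a weighted blend of the values

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```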