18 • Opponent processing
A continuous trade-off between selection for efficiency and variation for resiliency enables our capacity for growth. Processes that appear to work against each other drive progress at a higher level.
Continuing our quest to understand cognition as a dynamical system, last time we looked at how such systems evolve through differentiation and integration at the same time. Today we will look further into this fascinating interplay between two opposing processes.
Paying attention
Paying attention right now — how are you doing it?
You're sort of doing it, but you're also sort of participating in it. To a large degree it's self-organizing, because you pay attention to what's obvious to you.
But what's actually happening is you have two networks: the task focus network in your brain and the default mode network. And what's happening right now, possibly, I hope not too much, is:
The default mode network is making you mind-wander. Your attention is going to other things.
And then the task focus network kills off most of those options and brings back your attention to the talk. But you might have kept a couple of the things you were thinking about, because they actually help you understand the talk better.
Then, wander away, focus back, wander away, focus back — now if you'll note, that's exactly the same structure of biological evolution: In biological evolution you have a process that introduces variation — processes, I should say — and then you have natural selection that winnows them down. And then from that it reproduces — this is the self-organizing cycle. You get a new morphology — variation, selection, variation, selection — in a reproductive self-organizing cycle.
That's exactly what your attention does. It just does it massively faster than biological evolution. You're doing it right now. You're evolving your attentional cognitive fittedness to the environment.
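The loop just described, variation proposing options and selection winnowing them down, can be sketched in a few lines of code. Everything here (the function name, the numbers, the one-dimensional notion of "fit") is an illustrative assumption of mine, not part of Vervaeke's model:

```python
import random

# Toy sketch of a variation-selection cycle: we "evolve" a guess
# toward a target by alternating variation (propose nearby options,
# the "default mode") and selection (keep the best fit, the "task
# focus"). All numbers are arbitrary choices for illustration.

def evolve(target, guess=0.0, generations=40, pool=8, spread=1.0, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        # Variation: propose many options near the current state.
        options = [guess + rng.gauss(0, spread) for _ in range(pool)]
        options.append(guess)  # keeping the current state is an option too
        # Selection: winnow the options down to the best fit.
        guess = min(options, key=lambda g: abs(g - target))
    return guess

print(evolve(target=3.7))
```

Because the current guess always survives into the next round, the fit can only improve or hold from one generation to the next; it is the interplay of the two "opposed" steps, not either one alone, that drives the convergence.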
Optimal grip
You're working towards getting what Merleau-Ponty called an optimal grip on the world. What does that mean?
If I want to look at an object, how do I orient towards it? If I want to read my phone, [holding it far away] might be too far, unless I forgot my glasses. If I want to use my phone as a weapon, I want to maybe orient it closer this way. I might want to look at some detailed thing and I have to bring it close up. The thing is, these are all trade-off relationships. […] What you're constantly doing is you're trading between these opposing goals, because as you get one you lose the other.
Pay attention to what I just said about attention: Attention is doing what's called opponent processing: You have one process that's working towards one goal — open up your awareness, open up your attention — an opposing process that's winnowing down. And they are dynamically coupled together, so they evolve your attention. As you do the opponent processing and trade off between all these options, you evolve your sensory motor [loop] until you get to the best fit for the problem at hand. That is your optimal grip. You're doing that all the time. You're doing it right now.
This example gives us an intuitive understanding of how our cognition evolves its grip on our current environment. But why would it do that?
Bio-economies
Think of your biology as economic — this is part of Darwin’s great insight.
Now don’t be confused here! When a lot of times people hear "economic" they hear "financial economy". That’s not what an economy is. An economy is a self-organizing system that is constantly dealing with the distribution of goods and services, the allocation and use of resources, often in order to further maintain and develop that economy.
Your body is a bio-economy. You have valuable resources of time, metabolic energy, processing power (think about how we say we “pay attention” by the way). And what you do as an autopoietic thing is, you are organized such that the distribution of those resources serves the constitutive goal. It will serve other goals, of course, but it serves the constitutive goal of preserving the bio-economy itself.
And the thing about economies, of course, is they’re self-organizing. They are "bio": they are part of, they come out of, your biology. They are not semantic or syntactic properties (we use semantic and syntactic terms to talk about them; let’s not keep making that confusion). They are multi-scaled. Economies work locally and globally, simultaneously bottom-up and top-down.
These bio-economic properties comport well with the analogy, because Darwin’s theory is ultimately a bio-economic theory.
Logistical norms
Can we think about what kind of norms are at work in a bio-economy? […]
Economies are regulated by logistical norms. Logistics is the study of the proper disposition and use of your resources. If you are doing a logistical analysis for the military you are trying to figure out how my limited resources — food, and ammo, and personnel, and time and space — how can I best use them to achieve the goals I need to achieve?
What are logistical norms? Logistical norms are things like efficiency and resiliency. […] A way of thinking about these is: resiliency is basically long-term, broadly applying efficiency. But instead of talking about short-term efficiency and long-term efficiency, which is confusing, we’ll talk about efficiency and resiliency.
What if relevance realization is this ongoing evolution of our cognitive-interactional fittedness, that there is some virtual engine that is regulating the sensory-motor loop, and it is regulating it by regulating the bio-economy, and it’s regulating the bio-economy in terms of logistical norms like efficiency and resiliency?
Now all of this, of course, can be described scientifically, mathematically, etc., because, of course, Darwin’s theory is a scientific theory. We can do calculations on these things, etc. One more time: The fact that I use science to talk about it does not mean that it exemplifies propositional properties. The properties of my theory and the properties of what my theory is about are not the same thing.
What kind of relationship? How do we put this notion of self-organization and this notion of the logistical norms governing the bio-economy together? One way of doing this is to think about the multi-scalar way, across many scales of analysis, in which your bio-economy is organized to function.
Example: autonomic nervous system
Let's take your autonomic nervous system as an example. This is not exhaustive, in fact my point is: You will find this strategy, this design at many levels of analysis in your biology. I'm only using this as an example.
Your autonomic nervous system is the part of your nervous system that is responsible for your level of arousal. That doesn't mean sexual arousal. Arousal means how — and notice how this is logistical — how much of your metabolic resources are being converted into the possibility of action and interaction. You have a very low level of arousal when you're falling asleep. You have a very high level of arousal when you're running away from a tiger.
Think about this: There is no final, perfect design on your level of arousal. There isn't a level that you should always shoot for:
You shouldn't maximize your level of arousal. If I'm always, "ARGGHHHHH!", that's not good. I'm never going to sleep, I'm never going to heal.
If I'm just like, always, “Ok, that's it, I’m going to sleep”, that's not good.
And the Canadian solution? “Well, I'll always have a middling level of arousal.” That's not good either, because I can't fall asleep and I can't run away from the tiger.
So what does your autonomic nervous system do? Your autonomic nervous system is divided into two components: There is your sympathetic and your parasympathetic.
Your sympathetic system is really biased. It's designed toward interpreting the world in a particular way — remember, the things that make us adaptive also make us susceptible to self-deception — it's biased, because you can't look at all of the evidence. It's biased toward looking for and interpreting evidence — and I mean evidence non-anthropomorphically — that you should raise your level of arousal.
Your parasympathetic system is biased the other way. These are both heuristic ways of processing. They work in terms of biasing the processing of data. The parasympathetic system is constantly trying to find evidence that you should reduce your level of arousal. They're opposed in their goal. But here's the thing: They're also interdependent in their function.
The sympathetic nervous system is always trying to arouse you. And the parasympathetic system is always trying to pull you down. And as the environment changes that tug of war shifts around your level of arousal.
Opponent processing
Opponent processing is when you have two systems that are opposed, but integrated. This opponent processing means that your level of arousal is constantly evolving to fit the environment.
Is it perfect? No, nothing can be. Any problem-solving machine, in order to be perfect, would have to explore the complete problem space. That's combinatorially explosive. It can't. But what is this? You've seen this before. Opponent processing is a powerful way to get optimization. […]
You're optimizing between systems that are working on different goals, but are integrated in their function. And that way the system constantly self-organizes and it then thereby evolves its fittedness to the environment.
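A minimal simulation can make this concrete. The sketch below is my own toy model, not Vervaeke's: two biased processes are coupled to one shared state variable, one only ever pushes arousal up, the other only ever pulls it down, and their tug of war makes the state track a changing environmental demand:

```python
# Toy model of opponent processing (illustrative assumptions of my
# own): arousal and demand live on a 0..1 scale, and the two gains
# are arbitrary. The sympathetic-like process responds only to
# evidence that arousal is too low; the parasympathetic-like process
# responds only to evidence that arousal is too high.

def step(arousal, demand, up_gain=0.3, down_gain=0.3):
    # Sympathetic bias: push up, and only up.
    push_up = up_gain * max(demand - arousal, 0.0)
    # Parasympathetic bias: pull down, and only down.
    pull_down = down_gain * max(arousal - demand, 0.0)
    return arousal + push_up - pull_down

def track(demands, arousal=0.5):
    history = []
    for d in demands:
        for _ in range(20):          # let the tug of war settle
            arousal = step(arousal, d)
        history.append(round(arousal, 2))
    return history

# A tiger appears (demand 0.9), then it's bedtime (demand 0.1).
print(track([0.9, 0.1]))  # prints [0.9, 0.1]
```

Neither process "knows" the right level of arousal; each is blindly biased in one direction. The fit to the environment emerges only from their opposed, interdependent functioning.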
Efficiency or resiliency?
The way we can get this, I would argue, is by thinking about how the brain — and I am going to argue very importantly, the embodied, embedded brain — uses opponent processing in a multi-scalar way in order to regulate your autopoietic bio-economy, so that it is constantly optimizing your cognitive-interactional fittedness to the environment.
Let's think about it this way: Let's think if we can get a virtual engine out of efficiency and resiliency, because here's the thing about them: they are in an opponent relationship.
They pursue… “pursue” — the problem with language. It's like Nietzsche said: "I fear we are not getting rid of God because we still believe in grammar." The problem with language is that it makes everything sound like an agent. It makes everything sound like it has intentionality. It makes everything sound like it has intelligence. And of course that's not the case. So bear with me on this! I have to speak anthropomorphically just because that's the way language makes me speak.
Let's use a financial analogy to understand the trade-off relationship between efficiency and resiliency. Not all economies are financial because the resource that's being disposed of in an economy is not necessarily money. It might be time, etc. I'm using a financial analogy, or at least a commercial analogy, perhaps is a better way of putting it, in order to try and get some understanding of how these are in a trade-off relationship.
So you have a business. One of the things you might do is you might try to make your business more efficient because — ceteris paribus — if your business is more efficient than that person's business, you're going to outcompete them. You're going to survive and they're going to die off — obviously the analogy to evolution.
What do I do? What I do is I try to maximize the ratio between profit and expenditure/cost. We keep thinking of it as the magical solution, but we've been doing it since Ronald Reagan, at least. We do massive downsizing. We fire as many people as we can in our business. And that way, what we have is we have the most profit for the least labor costs. That's surely the answer, right? Notice what efficiency is doing. Notice how efficiency is a selective constraint.
The problem arises if you are “cut to the bone”, if you've "reduced all the fat", if you've got all the “efficiencies” — and this is the magic word that people often invoke — while forgetting the opponent relationship to resiliency.
If I cut my business to the bone like that, what happens if one person is sick? Nobody can pick up the slack because everyone is working to the max.
What happens if there's an unexpected change in the environment, a new threat or a new opportunity? Nobody can take it on because everybody is working at the limit. I have no resources by which I can repair, restructure, redesign myself. I don't have any precursors to new ways of organizing because there is nothing that isn't being fully used.

Notice also, if there's no slack in my system — and this is now happening with the way AI is accelerating things — error propagates, massively and quickly. If there's no redundancy, there's no slack in the system, there's no place, there's no wiggle room and error just floods the system.
If I make the system too efficient, I lose resiliency. I lose the capacity to differentiate, restructure, redesign, repair, exapt new functions out of existing functions, slow down how error propagates through the system.
Efficiency and resiliency are in a trade-off relationship. What resiliency is trying to do is enable you to encounter new things, enable you to deal with unexpected situations of damage, or threat, or opportunity. It's enabling.
These are in a trade-off relationship. As I gain one, I lose the other.

What if I set up a virtual engine in the brain that makes use of this trade-off relationship? It sets up a virtual engine between the selective constraints of efficiency and the enabling constraints of resiliency, and that virtual engine bio-economically, logistically shapes my sensory-motor loop with the environment, so it's constantly evolving its fittedness.
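The business example above can be turned into a toy simulation. This is a hedged sketch of my own, with made-up numbers: staff is split between billable work (the efficiency side) and slack (the resiliency side), and random shocks can only be absorbed if there is slack to spare:

```python
import random

# Toy model of the efficiency/resiliency trade-off (my construction,
# not Vervaeke's; all parameters are arbitrary). A shock is anything
# unexpected: someone is sick, the market shifts. Without slack, a
# shock wipes out that round's profit.

def run(slack_ratio, staff=10, rounds=200, shock_p=0.4, seed=1):
    """Total profit over many rounds for a given slack allocation."""
    rng = random.Random(seed)
    slack = int(staff * slack_ratio)     # people kept in reserve (resiliency)
    billable = staff - slack             # people generating profit (efficiency)
    profit = 0.0
    for _ in range(rounds):
        shock = rng.random() < shock_p
        if shock and slack == 0:
            continue                     # nobody can pick up the slack
        profit += billable
    return profit

lean = run(slack_ratio=0.0)    # "cut to the bone": maximally efficient
hedged = run(slack_ratio=0.2)  # keeps some redundancy
print(lean, hedged)
```

Per round the lean business earns more (10 units vs. 8), yet when shocks are frequent enough the hedged one comes out ahead over time; tune `shock_p` down far enough and the ranking flips. That is the trade-off: neither allocation is the right one independent of the environment, which is why an ongoing opponent process beats any fixed balance.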
Apart from its function of giving a plausible explanation for how our cognition works, I find this mental model of opponent processing perhaps the most important and useful piece in Vervaeke’s whole series.
It allows you to transcend polarization — the perspective of two opposing forces having to be statically balanced, often by suppressing one force while amplifying the other. If you can step out of this limiting view and realize that there is an additional dimension of evolution over time, then both opposing processes turn from a simplistic good/evil framing into necessary ingredients of a larger process of development.
Sometimes selection “wins” and reduces options to capture efficiencies. Sometimes variation “wins” and provides options to respond to threats or to unlock new opportunities. Neither is inherently good or bad. Both are required to make progress.
I find this mental model incredibly powerful and versatile. Whenever I come across polarization (and isn’t it easy to come across it these days?), I now look for the possibility of the opponent forces to dynamically interact with each other in such a way that both are contributing to growth over time.
It is so easy to feel like you are stuck in an endless circle of each side sometimes winning, sometimes losing, without making any progress. But as you slightly shift your perspective, you notice that the two-dimensional circle is actually a three-dimensional spiral, and you have been moving the whole time on that third dimension you couldn’t see before.
Complexifying your cognition
What's constantly happening in your brain is this process of evolving your fittedness: constantly trading off between integrating information and differentiating it.
A system is complex if it is simultaneously integrated and differentiated. My hand is highly differentiated, but it's also highly integrated. When something is both highly integrated and highly differentiated, it can do many different things without falling apart. Complexification enhances agency. It gives you your emergent abilities.
You have your cognitive orientation. It's giving you your basic navigation in this situation. […] You're evolving at many levels your sensory motor fittedness to the world. And you are constantly complexifying your cognition in order to track and couple to the complexity of the world.
This talk is fairly complex. You're switching back and forth very rapidly in your brain between integrating information and differentiating it; we can measure it, it's called metastability. Differentiating it to evolve it, and integrating it to evolve it. Variation, selection. Differentiation varies, integration selects.
Insight (again)
We know that this process is capable of massive self-correction. Sometimes you realize the way you've been framing something is making you pay attention to the wrong things, making you regard the wrong things as relevant. You may say something like, "Oh! I thought she was angry, but she's actually afraid." I've been paying attention to the wrong things. I hadn't noticed what I need to pay attention to.
When you have that “Aha!” moment — that's a moment of insight. That's a moment of emergence of a new way of fitting yourself to the world. You break an inappropriate frame. You break it up: "Oh, I shouldn't be looking at her this way!" And you reorient: "I should be looking at her that way." Frame breaking and frame making show up in insight, and insight is an instance of a powerful speeding up of the process of the cognitive evolution of your fittedness to the world.
What we know, unfortunately, as true, is that intelligence, while so impressive, is only very weakly predictive of rationality. You take all the standard measures for intelligence, they all strongly predict each other. You take standard measures for rationality (this is the work of Stanovich and others), they all strongly predict each other. Two strong positive manifolds. But the measure for general intelligence and the measure for general rationality are only correlated at 0.3. Intelligence is necessary but nowhere near sufficient for rationality.
Why is that? Because think about what I said: You're intelligent by ignoring most of the information. Remember my example of your interaction with the woman? Sometimes the information you ignore is exactly the information you need to solve your problem. You realize that in a moment of insight.
Self-deception (again)
But the opposite is also constantly possible for you: You don't realize that you misframe the situation. You're finding the wrong things salient. And therefore you're judging the wrong things to be relevant or important.
And we live in a world that is exacerbating that. Harry Frankfurt called that bullshit. And I don't mean to be offensive; that's the term from his famous essay. The liar cares about your concern for the truth: they tell you something and try to get you to believe it to be true, because of your commitment to the truth. The bullshit artist does something different: They try to make something salient to you and have you not care about whether it's true or false.
All advertising works this way. You watch some advertisement about shampoo and there's beautiful gorgeous people, who are obviously genetically perfect, experiencing nothing but happiness as they wash their hair with this shampoo. You know it's not true. Like, if you believe that's true, you're insane. You know it's not true. Why do they do it though? Because you buy the product. Because salience — how things are relevant and obvious to you — catches your attention and directs your behavior.
That kind of self-deceptive, self-destructive behavior is a perennial permanent threat to your cognition. You're always susceptible to that foolishness. Which means we need processes that help to ameliorate it. You can't shut off relevance realization or you're doomed. So you need something that can increase the amount of insight you have, your cognitive flexibility, your ability to re-adapt to refit yourself to situations.
What's very powerful about this is that across time, history, culture, environment you see people coming up with sets of practices for ameliorating that self-deception and enhancing that cognitive fittedness. The name that's most usually used for those practices is wisdom.
See, knowledge deals with ignorance, but wisdom deals with foolishness. And those are not the same thing. You can be very intelligent and very foolish, and there is no contradiction there at all.
Mirror of the Self is a weekly newsletter series trying to explain the connection between creators and their creations, and analyze the process of crafting beautiful objects, products, and art. Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people that know, use, and love them.
If you are new to this series, start here: A secular definition of sacredness.
For an overview and synopsis of the first 13 articles, see: Previously… — A Recap.