19 • Here- and Now-ness
Our consciousness performs incredible work to present us with what we find salient in the present moment. Can we get a grasp on what consciousness does and is?
As we move deeper and deeper into the rabbit hole, explanations rely more and more on concepts explained earlier in this series. I’ve tried to link everything so that even if you start reading here today, you can jump back to the right spot of previous posts to learn more.1
And as we get a more detailed understanding of all the parts involved in consciousness, we are also connecting everything together.
We differentiate, and we integrate.
Here and now
What's the present moment? Is it right here, right now? That nanosecond? This second? The last five minutes? The last hour? What's the present moment? The word "present" doesn't have a particular meaning. It's called an indexical. It's relative.
What's here? What's now? People think, "I can tell you what the present moment is: It's paying attention to the here and now." That's useless.
What's "here"? This spot I'm standing on? This room? This city of Toronto? This solar system? This universe out of all of the universes in the multiverse? What's "now"?
You're not explaining anything. That language helps train people. But it's overly simplistic and misleading when we're trying to understand.
The words “here” and “now” are interesting — they don’t have a well-defined meaning, and yet we usually intuitively understand what is meant from the context. This linguistic peculiarity points directly towards the key function of consciousness.
The function of consciousness
Salience landscape
Part of what's happening, part of what consciousness is doing, is it's creating a salience landscape for me.
What does that mean for me? First of all, I'm picking out, out of all of the things I could pick out — and when I say "I", I don't mean "me", I mean "my consciousness" — I'm picking out some features.
You are not paying attention to every piece of information in this room. You can't. It's overwhelmingly vast. But you pick out some. And then what you do is you begin… so you've already selected, and you start to prioritize it, and you foreground some of that.
For example, presumably, I'm foregrounded, and what's around me is backgrounded. And of course we've already seen it's going both ways. And notice again, what I'm looking at, what I'm looking through. And I'm taking the features and I'm starting to foreground them, and then I'm going to gestalt those features. I'm going to figure, I'm going to create a figure. We use this language of "figuring out". I'm making it stand out even more, more salient to me, and I'm also con-figuring it together.
All the features, and then foregrounded, then this [cup] is getting configured. This also is feeding back. And then of course I'm framing problems. […] So you've got a very complex dynamical system at work.
What's happening right now is your consciousness is creating a salience landscape. Some things are rising up out of unintelligibility as features that are getting foregrounded and configured. And then you're framing problems around them. And then things are shifting, your attention is shifting around. Other things are becoming salient, and you've got this highly textured, highly flowing, salience landscape.
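To make the dynamics concrete, here is a toy caricature (my own illustration, not Vervaeke's model, with made-up feature names and scores): a salience landscape as a set of features with shifting salience scores, where only the top few are foregrounded at any moment.

```python
import random

def foreground(salience, k=3):
    """Foregrounding: pick the k most salient features; the rest recede."""
    return sorted(salience, key=salience.get, reverse=True)[:k]

def drift(salience, rate=0.3):
    """Attention shifts: each salience score wanders a little every moment."""
    return {f: max(0.0, s + random.uniform(-rate, rate))
            for f, s in salience.items()}

random.seed(0)
landscape = {"cup": 0.9, "table": 0.4, "hum of fan": 0.1,
             "speaker": 0.8, "carpet": 0.2}
for moment in range(3):
    print(foreground(landscape))  # what stands out, this moment
    landscape = drift(landscape)
```

The point of the sketch is only the dynamics: what is foregrounded is a moving selection out of far more than can be attended to at once.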
That's what it's like to be here, right now.
Presence landscape
Now there's more going on of course. Part of what I'm doing, I get this salience landscape and my problem is around the cup, but I'm not quite sure, so I move around it. I try to get into an optimal position. If I get too close, I lose too much of the gestalt. And if I get too far away, I may see the whole thing but I'm losing the details. I need to get to the right place where I can — metaphorically and also literally in this sense — get what Merleau-Ponty calls an optimal grip on it.
So what I'm trying to do is: I'm optimizing between gestalt and feature, between looking through and looking at. I'm optimizing within this whole sizing up. I'm taking my salience landscape and I'm using it to get an optimal grip on things. Not maximal. And "grip" is meant here as a metaphor. It's meant for my contact, my interactional contact.
How can we understand what this optimal grip is doing? When I get the salience landscape and I adjust, an affordance opens up. What's an affordance? This goes back to Gibson and the idea that visual perception is this active process of landscaping.
The cup is graspable to me. That's not a property of the cup per se, because it's not graspable by a praying mantis. It's not a property just of my hand because my hand alone can't grasp. An affordance is setting up a relationship of coordination between the constraints in the thing and the constraints in my hand so that I can engage in an interaction. It's a way of co-identifying.
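As a minimal sketch of that relational point (my illustration, with invented names and numbers), you can model an affordance as a predicate over both the object's constraints and the agent's constraints, so that neither alone determines it.

```python
from dataclasses import dataclass

@dataclass
class Cup:
    diameter_cm: float

@dataclass
class Hand:
    grip_span_cm: float

def graspable(obj: Cup, agent: Hand) -> bool:
    """The affordance exists only in the fit between object and agent."""
    return obj.diameter_cm <= agent.grip_span_cm

cup = Cup(diameter_cm=8.0)
print(graspable(cup, Hand(grip_span_cm=12.0)))  # a human hand: True
print(graspable(cup, Hand(grip_span_cm=1.5)))   # a mantis-sized grip: False
```

The same cup affords grasping to one agent and not to another; graspability lives in the coordination of the two sets of constraints, which is the co-identification the text describes.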
The cup is… this thing is… it's been made salient to me. I've got now an optimal grip on it such that I can create affordances. It is presenting itself to me, and I am configuring myself to it. It is graspable by me. And this is Gibson's point: You don't see colors and shapes, what you see are affordances. I see that this [floor] is walkable. That this [table] is where I can place things. That this [marker] is movable.
So, do you see? You get the basics: The salience landscape gets you in contact. Then you start the optimal gripping. And the optimal gripping gets you into the creation of affordances, where basically the agent and the arena are being co-identified. I'm a grasper and this is graspable. I am presenting myself to it and it is presenting itself to me.
Consciousness is setting up a salience landscape. But within this you're doing this process of sizing up and that produces a presence landscape. A whole affordance network is laid out for you.
Depth landscape
But that's not enough. […] You need to be able to track the differences between correlational patterns and causal patterns. As you interact with things, your brain is figuring out the causal patterns as opposed to the merely correlational. This is the depth landscape.
You see kids doing this: You got the two year old and they got this spoon. They pick up the spoon and they drop it on the floor. They pick it up and [strike the table with it]. And they do this over and over again. Why are they doing that? Because they're trying to use their salience landscape to generate affordances. The spoon is graspable, it's throwable, it's droppable.
But why do they repeatedly grasp and throw and drop? Because they're trying to figure out the causal patterns around the spoon. They're transforming the salience landscape into a presence landscape and that into a depth landscape. They're getting a deep kind of understanding — not in words, but interactionally — of the spoon.
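A toy way to see the causal/correlational difference (my own hypothetical example, not from the source): only by intervening, the way the toddler does, can you separate a causal pattern from a mere co-occurrence.

```python
def world(drop_spoon: bool) -> dict:
    """A tiny hypothetical causal model: dropping the spoon causes a
    clatter; the ceiling light is on either way (mere co-occurrence)."""
    return {"clatter": drop_spoon, "light_on": True}

# Intervene both ways and compare what changes.
dropped = world(drop_spoon=True)
held = world(drop_spoon=False)
print(dropped["clatter"], held["clatter"])    # differs: causal pattern
print(dropped["light_on"], held["light_on"])  # same: merely correlational
```

Passive observation of spoon-droppings would show clatter and light together every time; only the intervention reveals which pattern actually depends on the action.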
This is what consciousness is doing for you. It's doing it right now. All of this is a way in which consciousness is helping you zero in on relevant information:
It's creating this textured salience landscape so that certain things stand out for you and other things don't as much. And it's constantly shifting dynamically.
And then within that it's creating a presence landscape of how you and what's salient are being co-identified, coupled together into an agent and arena relationship. And then it's also affording you. And that's dynamic because the affordances are constantly shifting.
And then that's affording you tracking the causal patterns, getting into deeper contact with the guts of the world. [depth landscape]
That's what consciousness is doing.
All of what we learned about in this series comes together in the present moment, where our (sub-)consciousness continuously performs relevance realization to present us with the salient concepts we can interact with to identify causal patterns that help us understand our environment.
Everything that stands out to you in this sentence you are reading right now, the way you are able to connect it to what you read before, in the sentences above it, or to what you remember from previous posts and other sources, or how you relate it to your own life and memories — all that is happening now in your own consciousness, individual to you and your experience, defining how you make sense of the (your?) world around you.
Salience tagging
Representations are aspectual
I hold this [pen] up and you form a representation of it. Remember all the things we talked about when we talked about categorization? We talked about similarity, etc. When you form a representation, you do not grasp all of the true properties of this object, because all of the true properties, the number is combinatorially explosive. […] Out of all the properties you just select some subset. And what subsets do you pick? You pick a subset that is — here it comes — relevant to you.
Are they just a feature list? No, we’ve already seen that a long time ago, they have a structural-functional organization, they are made relevant to each other. Here’s what we’ve got: A set of features that are relevant to each other. And then, a set of features that have been structurally-functionally organized so that they have co-relevance, is then relevant to me. That’s what an aspect is.
This is a marker. However, I could change its aspectuality: It’s now a weapon! And we do that all the time. In fact, one of the ways we check people’s creativity is to do exactly that: We give someone an object and say, how many different ways can you use it? How many different ways can you categorize it? Namely, how flexible are you in getting different aspects from the same object?
Representations are inherently aspectual. […] You’re zeroing in on relevant properties out of all the possible properties. You’re structuring them as how they are co-relevant to each other, and then how that structural-functional organization is relevant to you. Aspectuality deeply presupposes your ability to zero in on relevance, to do relevance realization. That means that representations can’t ultimately be the generators, the creators of relevance, they can’t be the causal origin of relevance.
Can representations feed back and alter what we find relevant? Of course, nobody is denying that. That’s of course why we use representations. But they can’t serve as the ontological basis, the stuff in reality that we’re trying to use to generate a noncircular account of relevance realization.
Now that’s going to tell us something really interesting. It’s going to tell us that if this meaning and this spirituality is bound to relevance realization, that the place to look for it is not going to be found at the level of our representational cognition, the level of our cognition that is using ideas, propositions, pictures, etc. Once again, I am not saying that those things do not contribute or affect what we consider relevant. What I am saying is that they are not the source, the locus of how we do relevance realization.
Salience tagging
I want to show you how this cashes out even in an empirical manner. This goes to some really interesting work done by Zenon Pylyshyn on what is called multiple object tracking. Multiple object tracking is really interesting. Basically, what you do is give people a bunch of objects on a computer screen — let’s say I have X’s and O’s, and they’ll be different colors and different shapes, all kinds of different things like this. And what I do is I have the objects move around. And let’s say this was a red X, and then after it moves around I ask you, “Where is the red X?” And you have to point at it. I may ask you, “Where is the green circle?”, “Where is the blue square?” You get the task.
Now what’s interesting is how well you can do this. You can track about eight objects reliably, on average. What’s really interesting is that the more objects you track, the fewer features you can attribute to each object. What do I mean by that?
Suppose I’m tracking six shapes. Suppose I was tracking the red X and I have to keep track of it. I can, after lots of movement, say, “It’s there now. It started there, and it’s there now.” What I won’t notice during that is that the red X has become, for example, a blue square! All of its content properties get lost. All I’m tracking […] is what you might call the here-ness, where is it, and the now-ness of it. Where is it? It’s here now, it’s here now, it’s here now, it’s here now, it’s here now. Everything else — its shape, its color, its categorical identity — gets lost.
He calls this FINSTing. This stands for FINgers of INSTantiation. The basic idea is this: Your mind has something equivalent to putting your finger on something. Suppose I didn’t know what this [water bottle] was — I put my finger on it. I don’t know what it is, I just know its here-now-ness. And it’s here now, it’s here now.
"Here" and "now" are indexicals. These are just terms that refer to the context of the speaker. So here, now. And it moves around, and my mind can keep in touch — notice my language: in touch, in contact — in touch with something. But that’s all it’s doing, it’s just tracking the here-now-ness.
That’s really cool! Why do we have this ability? First of all, I’m going to propose a way of thinking about this: He doesn’t use this language, but I think it will be helpful. I don’t think it’s in any way inconsistent.
This ability to do this is like salience tagging. When I touch this [bottle], I am making this here-now-ness salient to me. This here, now, is salient to me. Not the bottle, not even the flat surface, because remember I lose all of those particular qualities. All I have is that this here-now-ness is salient to me.
And we do this with demonstrative terms like “this”. Notice the word "this" is not like the word "cat". "Cat" refers you to a specific thing — meow, meow, the animal that pretends to love you. (Actually, I know some cats now that I am actually convinced do actually love me. So I have to amend my usual comments about cats.) But "this" isn’t like "cat". Watch: This [pen]! This [bottle]! This [wall]! This [light switch]! It doesn’t refer to a specific thing. It doesn’t make some thing salient, it just makes some here-ness and now-ness salient. Sorry for all this talk about "this", but this is how we have to make "this" salient to you.
[…] Terms like "this" and "here" and "now", but especially "this": These are linguistic terms and they do what is called demonstrative reference. They do not refer to a particular thing. They do not refer to the bottle or to the marker or to the wall. All they do is salience tagging this and that.
Why is that important? Well, Pylyshyn wants you to understand FINSTing — FINSTing is obviously not a linguistic phenomenon. I’m not speaking in my head when I’m doing this. In fact, if you try and speak in your head, you’re going to mess yourself up. He is using demonstrative reference as a linguistic analogy for something you enact. I’m going to try to draw that out by calling it enactive demonstrative reference, rather than linguistic demonstrative reference, which I’ve tried to explain to you with this notion of the salience tagging of here-ness and now-ness.
The mind in contact with the world
Why is this so important? Here’s where the analogy can help me: I need enactive demonstrative reference before I can do any categorization.
If I’m going to categorize things, I need to mentally group them together. "This" is mental grouping: this, this, this [three individual pens], this [the group of the three pens together]. That’s what mental grouping is. Mental grouping is to salience-tag things and bind them together in a salience tagging.
What I am trying to show you is: Any categorization you have depends on enactive demonstrative reference. And enactive demonstrative reference is only about salience and here-now-ness.
You see, all of your concepts are categorical. That whole conceptual, representational, categorical, pictorial… all of that depends on [categorization], but [categorization] depends on something that is pre-categorical, pre-conceptual.
And you say, “But you’re using concepts to talk about it!” Don’t confuse properties of the theory with properties of what the theory is about. Of course, I have to use words to talk about it. I have to use words to talk about atoms. That doesn’t mean that atoms are made out of words or dependent on words. I have to use words to talk about anything. And I don’t want properties of my theory and properties of the phenomena of the theory to be confused.
I want a theory about, for example, vagueness to itself be clear. I want a theory about illogicality to itself be logical. I want a theory about irrationality to itself be rational. Do not confuse properties of the theory with properties of the thing being referred to. Yes, I have to use language and concepts to talk about it, but that does not mean that the thing itself is made out of, or dependent on, concepts and categorization. I’ve given you an argument, and I’ve given you empirical evidence towards this claim, and they massively converge together.
Now notice, this is a fundamental connectedness to reality you’re getting with the FINSTing, with the enactive demonstrative reference, when you’re getting that initial salience tagging, because it’s like the mind being in contact with the world. That’s why Pylyshyn even uses the metaphor of contact.
Your mind is in touch with the world. But not through concepts and categories — these come out of the lower-level processing. The mind is more directly connected with what you experience here and now.
EnACTed perspectival knowing
Remember how relevance realization is happening at multiple interacting levels:
We can think about this where you're just getting features that are getting picked up. Remember the multiple object tracking. This, this! Basic salience assignment. […] [featurization]
The featurization is also feeding up into foregrounding and feeding back. A bunch of this, this, this, all these features, and then presumably I'm foregrounded and other stuff is backgrounded.
[Foregrounding] then feeds up into figuration. You're configuring me together and figuring me out — think of that language — so that I have a structural-functional organization. I'm aspectualized for you. That's feeding back. And of course there's feedback down to [featurization].
And then that of course feeds back to framing [or (problem) formulation], how you're framing your problems. And we've talked a lot about that, and that feeds back [to featurization].
So you've got this happening, and it's giving you this very dynamic and textured salience landscape.
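The loop just described can be sketched as a pipeline (my own simplification; the real process is massively recursive, with feedback at every stage): the current frame biases which features are even picked up, foregrounding prioritizes them, and figuration configures them into a gestalt.

```python
def featurize(scene, frame):
    """Featurization, biased by the current frame (feedback from framing)."""
    return [f for f in scene if frame(f)]

def prioritize(features, k=2):
    """Foregrounding: the top features stand out, the rest recede."""
    return features[:k]

def configure(foregrounded):
    """Figuration: co-relevant features are configured into one gestalt."""
    return tuple(foregrounded)

scene = ["handle", "rim", "hum of fan", "carpet"]
frame = lambda f: f in ("handle", "rim")  # framing: "something to drink from?"
gestalt = configure(prioritize(featurize(scene, frame)))
print(gestalt)  # ('handle', 'rim')
```

A feedforward sketch like this leaves out the crucial downward feedback (the gestalt reshaping featurization, the frame being revised), but it shows how a frame already constrains what can ever become a figure.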
And then you have to think about how that's the core machinery of your perspectival knowing. Notice what I'm suggesting to you here:
You've got the relevance realization that is the core machinery of your participatory knowing. It's how you are getting coupled to the world, so that co-evolution, reciprocal realization can occur. That's your participatory knowing.
[Relevance realization] feeds up to, feeds back to your salience landscaping. This is your perspectival knowing. This is what gives you your dynamic situational awareness, this textured salience landscaping.
This [salience landscaping] of course is going to […] open up an affordance landscape for you. Certain connections, affordances are going to become obvious to you. […] [Salience landscaping] is feeding up and what it's basically giving you is affordance obviation. Certain affordances are being selected and made obvious to you. That of course is going to be the basis of your procedural knowing, knowing how to interact. […]
We'll come back later on to how propositional knowing relates to all of this. I'm putting it aside because [the previous three ways of knowing] is where we do most of our talking about consciousness, with [salience landscaping], I think, at the core — the perspectival knowing.
But it's the perspectival knowing that's grounded in our participatory knowing, and it's our perspectival knowing… Look, your situational awareness that obviates affordances is what you need in order to train your skill. That's how you train your skills. And we know that consciousness is about doing this higher-order relevance realization — because that's what this is, this is higher-order relevance realization — that affords you solving your problems.
I need all of this when I'm talking about your salience landscaping. I'm talking about it as the nexus between your relevance realization, participatory knowing, and your affordance obviation, procedural knowing, your skill development, perspectival knowing at the core, and then what's happening in here is [featurization, foregrounding, figuration, and framing].
If that's the case, then you can think of your salience landscape as having at least three dimensions to it.
The three-dimensional salience landscape
One is pretty obvious to you, which is the aspectuality. As I said, your salience landscape is aspectualizing things. The features are being foregrounded and configured and they're being framed.
This is a marker. It is aspectualized. Remember? Whenever I'm representing or categorizing it, I'm not capturing all of its properties, I’m just capturing an aspect. This is aspectualized. Everything is aspectualized for me.
There's another dimension here of centrality. I'll come back to this later, but this has to do with the way relevance realization works. Relevance realization is ultimately grounded in how things are relevant to you, literally, how they are important to you — you “import”, how they are constitutive.
At some level, the sensory-motor stuff is there to get you stuff you literally need to import materially. And then, at a higher level, you literally need to import information to be constitutive of your cognition. […] But what you have is the perspectival knowing there is doing aspectuality, and then everything is centered. It's not non-valenced, it’s vectored onto me.
And then it has temporality, because this is a dynamic process of ongoing evolution. Timing, small differences in time, make huge impacts, huge differences in such dynamical processing. Kairos is really, really central. When you're intervening in these very complex, massively recursive, dynamically coupled systems, small variations can unexpectedly have major changes. Things have a central relevance in terms of their timing, not just their place in time.
Think of your salience landscape as an unfolding in these three dimensions of aspectuality, centrality and temporality. There's an acronym here: ACT. This is an en-ACT-ed kind of perspectival knowing. […]
Look at what it has:
Centrality is the here-ness — my consciousness is here, because it is indexed on me.
[Temporality] Of course it has now-ness, because timing is central to it. […]
[Aspectuality] And it has together-ness, unity, how everything fits together. I don't want to say unity, because unity makes it sound like there's a single thing. But there's a oneness to your consciousness, it’s all together.
You have the here-ness, the now-ness, the together-ness, the salience, the perspectival knowing, how it is centered on you.
A lot of the phenomenology of your consciousness is explained along with the functionality of your consciousness. Is that a complete account? No, but it's a lot of what your consciousness does and is.
Vervaeke is trying to give a naturalistic explanation of consciousness.2 We won't take it as far here in this series, but we now have the foundation to understand how much is happening at the edge of our self-awareness.
By the time we think in concepts and categories and words and symbols, relevance realization has already done a significant amount of work: It has already selected what we find salient in our environment here and now. It has filtered out the inexhaustible complexity of reality and presented us with a tiny sliver of it that is relevant to us, right here, right now.
To really be in touch with the world, we need to make sure that this low-level process of relevance realization works as well as it can and presents us with the “right” things.
Mirror of the Self is a weekly newsletter series trying to explain the connection between creators and their creations, and analyze the process of crafting beautiful objects, products, and art. Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people that know, use, and love them.
If you are new to this series, start here: A secular definition of sacredness.
For an overview and synopsis of the first 13 articles, see: Previously… — A Recap.
Ah, the irony that in a post about the here and now, you need to be equipped with knowledge from the past you can find elsewhere.
Vervaeke is aware that he is presenting a scientific theory at the edge of our current scientific knowledge. In episode 32 he points out:
I’m giving a theoretical structural-functional organization for how [relevance realization] can operate. The last couple of times we had the strong convergence argument to [relevance realization]. We have a naturalistic account of this, at least the rational promise that this is going to be forthcoming. And then we're getting an idea of how we can get a structural-functional organization of [relevance realization] in terms of firing and wiring machinery.
Now, this is again, like I said, this is both very exciting and potentially scary, because it does carry with it the real potential to give a natural explanation of the fundamental guts of our intelligence. I want to go a little bit further and suggest that not only may this help to give us a naturalistic account of general intelligence, it may point towards a naturalistic account, at least of the functionality, but perhaps also of some of the phenomenology of consciousness. This, again, is even more controversial. But again, my endeavor here is not to convince you that this is the final account or theory. It's to make plausible the possibility of a naturalistic explanation.
Later on in the same episode, he suggests:
Again, I'm going to say this again: I'm trying to give you stuff that makes this plausible. I'm sure that in specifics, it's going to turn out to be false, because that's how science works, but that's not what I need right now. What I tried to show you is how progressive the project is of naturalizing this, and how so much is converging towards it that it is plausible that this will be something that we can scientifically explain. And more than scientifically explain, that we'll be able to create, as we create autonomous artificial general intelligence.
Even though it seems that in this series I already presented most of his theory, there is a lot that I left out. My goal here is not to explain Vervaeke’s relevance realization in all its fascinating detail. This is something he does a lot better himself in his video series, and also in several scientific papers. If you want to dive even deeper to find out how plausible this all really is, I suggest starting with watching the full episodes 26 to 32 of Awakening from the Meaning Crisis and reading his scientific papers about relevance realization: