09 • Profoundness
If we can't have certainty, we can settle for plausibility, and still find what is profound.
As Martin Luther and René Descartes try to reconcile the recent scientific discoveries of their time with their worldviews, they come to contrary conclusions. Spirituality and rationality are pulled apart and pushed to their respective extremes — pure love or pure reason, blind faith or absolute certainty.
The failure to achieve certainty
On the one hand we have the cultural grammar of Luther, and the narcissism, and the radical self-doubt. And on the other hand we have the Cartesian grammar: we seek certainty, we won't believe anything until it's certain. And of course we vacillate between "I must accept it without any evidence or reason" (Luther) and "I could only accept it if it's absolutely certain and beyond question" (Descartes). And both of these, of course, are pathological.
The first is pathological because if you completely remove people's agency in how they come to their beliefs, then you radically undermine any meaning in life they might possess. The other one is the pursuit of certainty… and there are individuals who still seem to speak as if mathematical science will give us certainty. That's an illusion. Part of what we discover after Descartes, and Descartes himself was surprised that people ended up disagreeing with him, is that science doesn't and can't provide certainty.
These two equations I put up on the board,

F = ma

and

E = mc^2

one is from Einstein and the other from Newton. What Einstein showed is that the things Newton thought were certain, absolute space and time, these kinds of formulas, actually don't possess the certainty that Newton thought they did. We'll talk a little bit later about why we can't achieve certainty, except in very limited contexts. There are deep, deep reasons why we can't pursue certainty. And therefore we can't seek certainty as the solution to the loss of connection, connection to ourselves, connection to the world, connection to other minds.

Again, this radical irony, it's very similar to Luther. Why does Descartes' attempt to address this burgeoning loss of connections, why does it actually result in exactly the opposite, an increased sense of disconnectedness?
Well, part of it of course is the failure of the project of certainty. You can understand the 18th, the 19th, and especially the 20th centuries as the scientific, historical, and philosophical undermining of the idea that we can achieve certainty. Of course, one of the great principles of modern physics is the Uncertainty Principle.
Hold on… what is science about, if it’s not about certainty?!
Let’s start here: What exactly is certainty?
Logical and psychological certainty
What has happened in Descartes, by the time of Descartes, is we've seen this slow withdrawal — everything is withdrawing from the world into the mind. The mind is getting isolated, trapped inside of itself. And then Descartes famously worries about that. He says, “I want to doubt everything, try to find something I cannot doubt”. […] He makes a mistake about this notion of certainty.
There's two notions of certainty: There's a logical notion and a psychological notion. The logical notion of certainty is something like absolute deductive validity: It's impossible for the premises to be true — impossible — and the conclusion false. That's different from psychological certainty. Psychological certainty is an inability to doubt. So you find something certain because you are incapable of doubting it. The problem is these are not identical by any means.
Think of the radical bigot. The radical bigot — I am not, I hope, such a person — but the radical bigot cannot doubt certain things. They cannot doubt the superiority of the white race or some other such garbage. They're psychologically incapable of doubting it precisely because of the depth of their ignorance and bigotry. So they have psychological certainty, but it is certainly not logical certainty.
There is no direct connection between psychological certainty and logical certainty. But what Descartes does is he thinks that if he pushes [psychological certainty] far enough, it will somehow become identical to [logical certainty]. And it never does. And that's part of the problem we face.
We know how to produce the logical version of certainty — through deductive reasoning. If we stay within the confined framework of logic and math, we can prove propositions to be either true or false with certainty (in many cases, at least; even in logic and math there are exceptions). Formal correctness has the highest requirements for completeness and absolutely clear definitions. There is no room for ambiguity.
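To see what that confined, fully defined setting looks like, here is a minimal sketch of my own (not from the lecture): for a small propositional argument such as modus ponens, we can exhaustively check every possible truth assignment and confirm that the premises can never be true while the conclusion is false. That is logical certainty.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'a -> b' is only false when a is true and b is false."""
    return (not a) or b

# Modus ponens: from the premises 'P -> Q' and 'P', conclude 'Q'.
# Logical certainty means there is NO truth assignment in which
# all premises are true and the conclusion is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q
]

print("deductively valid:", not counterexamples)  # prints: deductively valid: True
```

The check is only possible because the framework is tiny and every term in it is precisely defined.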
But many problems we face are not at all clearly defined; they are ambiguous. For those, a logical proof of certainty is beyond our capabilities, beyond our understanding of the problem. In those cases we turn to science to give us something that is not certainty, but almost as useful.
Plausibility
Plausibility is central to your notion of how real things are.
Now there's two senses of the word plausible: One is a synonym for “highly probable”. That's not the sense I'm using. I'm using it in the sense that Rescher and others made famous where this means “makes good sense”, “stands to reason”, “should be taken seriously”.
Most of the time […] you cannot base your actions on certainty, but you have to rely on plausibility. […]
We regard a particular proposal, or a construct, or some way of trying to model the world as trustworthy, if it's been produced by many independent but converging lines of evidence.
Let me give you a clear, concrete example: You will regard as more real information that comes through multiple senses as opposed to one sense. If I'm only seeing something, there's a good chance that it's an illusion or a delusion caused by the subjectivity of my seeing. But if I can see it, and touch it, and hear it, and smell it, then the chance that each one of those independent senses is producing the same illusion is radically diminished. The fact that they are all telling me the same thing, now that doesn't give me certainty, but it gives me trustworthiness. That's what trustworthiness is — it reduces the probability that I am self-deceived. […]
This is why science likes numbers. We like numbers because they allow us to converge the senses. Look, you can see three, you can touch three — one, two, three — you can hear three [claps 3 times]. We like numbers not because we're fascists or something in science. We like numbers because numbers — quantification — help us to increase the trustworthiness of our information gathering. They allow us to reduce the chance that what we're getting, what we're measuring, what we're modeling, is being produced by self-deception.
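To make the convergence argument concrete, here is a toy calculation of my own (the error rates are invented, and it assumes the sources really are independent): the probability that every independent line of evidence misleads us in the same way shrinks multiplicatively with each line we add.

```python
# Toy model (made-up numbers, not from the lecture): each sense or measurement
# independently misleads us with some probability. If the sources really are
# independent, the chance that ALL of them mislead us at once shrinks
# multiplicatively as we add converging lines of evidence.
error_rates = {
    "sight": 0.10,
    "touch": 0.05,
    "hearing": 0.08,
}

p_all_deceived = 1.0
for sense, p_error in error_rates.items():
    p_all_deceived *= p_error
    print(f"after adding {sense:>7}: P(all sources deceived) <= {p_all_deceived:.4f}")

# The probability never reaches zero, so there is no certainty here,
# but each independent, converging source makes the result more trustworthy.
```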
Empirical sciences are based on trustworthiness and plausibility. While the scientific method cannot guarantee certainty, it is a self-correcting process for plausibility.
Science = self-correcting plausibility
You can't get certainty for almost all of your processing. You have to rely on plausibility, all the time. We say, "But I can turn to science — science will give me certainty."
First of all, pay attention to the history of science. When has it ever done that? Almost all of the theories that have been proposed in science have ultimately turned out to be false in some significant or important way. Science isn't believed in because it gives us certainty or facts. Science is believed in because it gives us self-correcting plausibility.
How do I decide what hypothesis to test? I don't test any hypothesis I come up with. I wonder if clipping my toenails will reduce famine in the Sahara? Let's test it out! I wonder if I gather enough frogs together, can I influence the Australian election? Let's test it out! And you say to me, “That's…” — what? — “…ridiculous. That's absurd!” What you're saying to me is those hypotheses don't make sense. They don't deserve to be taken seriously. What you're saying to me is, "I reject them because they're implausible".
Now I go into my experiment, I'm going to run an experiment in science. What do I have to do? I have to control for alternative explanations. What we're always doing in science is inference to the best explanation. This goes to the work of Peter Lipton and others. Here's some phenomena. What I do is I have some candidate explanations for what's causing the phenomena. And then what I do is I put them into competition with each other. Which one of my hypotheses best explains it? And the one that best explains it is chosen as what's real.
How would you make this certain? The way you would make this deductively certain is you would have to check all possible explanations. How many possible explanations are there? An infinite number. You can't ever make science certain because you're always doing this. This explanation is only as good as the competition it beats. In science, you advance by coming up with plausible alternative explanations that you beat with yours.
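Here is a deliberately simple sketch of that competition, my own illustration rather than Lipton's formal account: a handful of candidate explanations for a few made-up observations, scored by fit plus a crude complexity penalty, where the winner is only the best of the candidates actually considered.

```python
# Inference to the best explanation as a competition among a FINITE set of
# candidate models. The winner is only "best" relative to the candidates we
# bothered to consider -- never certain.
observations = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]  # made-up (x, y) data

candidates = {
    "constant y = 4": lambda x: 4.0,
    "linear y = 2x + 1": lambda x: 2.0 * x + 1.0,
    "quadratic y = x^2 + 1": lambda x: x**2 + 1.0,
}
complexity = {"constant y = 4": 1, "linear y = 2x + 1": 2, "quadratic y = x^2 + 1": 3}

def score(name: str) -> float:
    # Misfit (sum of squared errors) plus an arbitrary penalty for complexity.
    sse = sum((y - candidates[name](x)) ** 2 for x, y in observations)
    return sse + complexity[name]

best = min(candidates, key=score)
for name in candidates:
    print(f"{name:<22} score = {score(name):.2f}")
print("best explanation among those considered:", best)
```

The point of the sketch is the shape of the reasoning, not the arithmetic: which candidates made it onto the list, which penalty we used, and how we read the resulting scores are all plausibility judgments.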
Science depends on plausibility judgments. It depends on plausibility judgments when we choose our hypothesis. It depends on plausibility judgments when we choose what variables we're going to control for in experiments. It depends on plausibility judgments once we're done and we have the data and we have to interpret it. What is the number of interpretations I can give for any data? Infinite in number. What do I do? I generate the most plausible interpretation. Before the experiment, during the experiment, and after the experiment I'm relying on plausibility. Plausibility is indispensable. That's why your brain looks for it.
We will come back to how our brain looks for and makes judgments of plausibility and why this is relevant.
But let’s first complete this model of plausibility by adding a few other aspects to it, and see how useful it is.
Elegance = multi-aptness
So we're converging to some processing state here, but we also want something to be the case. Because we're not just looking backwards into how we got there, we're also looking forward to what we can do with it.
What we want is we want a model that we can now apply to many new domains, that will open up the world for us, that's multi-apt. This is like taking a martial arts stance. I don't use this [stance] but I'm taking this stance because I can quickly adapt it to many different situations. It's multi-apt, it's highly functional.
So, why do I want this? When I can use the same model in many different places, this is, I would argue, what people mean when they say a theory or a model is elegant. You can use the same model. It's adaptive enough, it's multi-apt enough, that you can use it in many different places and apply it. So you have convergence for trustworthiness, but you have elegance for power, for multi-aptness, for multi-apt application.
Is that enough? No. I think this state has to be highly fluent to you. This has to be one that you can use readily, powerfully for yourself, that you can internalize.
Neither trivial nor far-fetched
When you have this, when you have fluency, convergence, elegance, you need one more thing: you need a balance between the convergence and the elegance.
If I have a lot of convergence without much elegance, that's triviality. The thing about trivial statements is not that they're false; they're true, but they're not powerful. They don't transform. Many times we reject things, we don't take them seriously, they don't make good sense to us, precisely because they're trivial.
What's the opposite? Very little convergence with a lot of promise of power. This is when things are far-fetched. Conspiracy theories have this feature. If they were true they would explain so much. If we would just accept that the British Royal Family were lizard beings from outer space we could explain so much of their behaviour. But the problem is, although that would be a very powerful explanation, we have very little trustworthy evidence that that is in fact the case.
Profoundness = convergence + elegance + fluency
So what we want is, we want that — as Milgram says — our backward commitments, and our forward commitments… We only commit powerfully forward if we've got a lot of trust in the model that we've produced. When all of this is in place, I think we find what we're processing not only fluent, we find it highly plausible. When we have very deep convergence and very deep elegance and very efficient fluency, I think we then find the proposal profound.
In summary, something is profound if it combines all these properties:
- plausible
  - convergent: produced by multiple independent lines of investigation (otherwise far-fetched, e.g. conspiracy theories)
  - trustworthy, can be taken seriously, reduces bias
- elegant
  - divergent: usable in many different domains (otherwise trivial)
  - multi-apt, flexible, powerful (often described as “simple”)
- fluent
  - familiar, readily usable, internalized
Profoundness in programming
I think this model of profoundness can also be extremely useful when thinking about programming. How do we decide what programming language, algorithm, library, or framework to use? Can we have certainty that we picked the best one? Not without looking at all of them. How many are there? Clearly too many to look at all of them.
What about picking one that is plausible? One that can be taken seriously to solve the problem at hand, and one that multiple independent sources converge on as a good solution for that problem?
And what about picking one that is also elegant? One that is powerful and multi-apt and can be applied in many different domains?
And what about picking one that we are also fluent in? One that we are familiar with, such that we understand how to use it effectively?
Combining plausibility with elegance and fluency seems to be a useful strategy here too. Next time you listen to a presentation about “The Best Architecture”, don’t fall for one that is far-fetched because the only evidence you have is this one person on stage telling you it worked well for them. Don’t fall for one that is trivial because it is only applicable to this one very particular use case. Make sure you find a good balance of trustworthiness and multi-aptness, so you can decide whether it is worth becoming familiar with and fluent in.
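If you want to make that balance explicit, here is a playful sketch (the tool names and numbers are entirely made up): rate each candidate on convergence, elegance, and fluency, and prefer a balanced profile over a single impressive spike.

```python
# Made-up candidates and scores, purely to illustrate the balance described above:
# convergence = how many independent teams/sources report success with it,
# elegance    = how many different kinds of problems it applies to,
# fluency     = how well you or your team already know it.
candidates = {
    "HypeFramework9000": {"convergence": 1, "elegance": 5, "fluency": 1},  # far-fetched
    "OneTrickLib":       {"convergence": 5, "elegance": 1, "fluency": 4},  # trivial
    "BoringButSolid":    {"convergence": 4, "elegance": 4, "fluency": 3},
}

def balance(scores: dict) -> float:
    # Multiplying instead of adding punishes a near-zero dimension:
    # a tool cannot win on elegance alone if nothing converges on it.
    result = 1.0
    for value in scores.values():
        result *= value
    return result

for name, scores in sorted(candidates.items(), key=lambda kv: -balance(kv[1])):
    print(f"{name:<18} {scores}  balance = {balance(scores)}")
```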
It’s so easy to confuse certainty with plausibility, and accidentally mistake something as certain when it is hardly even plausible. Especially in software development we are quick to argue from an intuitive standpoint of certainty, because software and algorithms seem so close to deductive reasoning that everything we discover looks like a stepping stone on the path towards certainty and absolute truth.
But all we have — even in software development — are hopefully plausible and sometimes profound solutions to complex problems that we can never be quite certain of. Because there is always a chance that what we come up with, and are so convinced of, is severely biased and a product of self-deception.1
If you are new to this series, start here: A secular definition of sacredness
This is one of the reasons why scientists are at a disadvantage in our current times. Any serious scientist will never claim certainty and will always leave open the option that their model or theory will be replaced by a better one later — which effectively means theirs was wrong, even though it could turn out to be extremely useful (see Einstein vs. Newton).
Claiming certainty is the domain of the bullshit artist, who doesn’t really care if something is true or false or plausible as long as it is in any way beneficial to them. If they can deceive you into not caring about plausibility either, they have already achieved what they set out to do.