Video and transcript of my presentation at SPLASH Onward! 2024
Conference presentation of my essay “A New Cognitive Perspective on Simplicity in System and Product Design” at SPLASH Pasadena 2024.
On Thursday, 24th October 2024 I presented my Onward! Essay at SPLASH 2024 in Pasadena, California. Here’s the video of my presentation as well as the (AI-generated, but slightly manually edited) transcript.
The transcript is long and I included most of my presentation slides as pictures. This means that if you receive this as a newsletter in your email inbox, it will likely be truncated. So please make sure to click through to the full version on the website.
Video
Transcript
So, simplicity.
What is that? What does simple mean?
I guess we understand these words because we use them all the time and I heard them here in the conference already many times.
“Oh yeah, that was simple.”
“Oh, that was particularly complex.”
But what do we mean by that?
I mean, we kind of know what we mean, but it's more of a feeling. It's like we have some sort of intuition about it. We know it when we see it.
But how do we describe simplicity?
How do we define it?
Somehow that turns out to be really difficult, doesn't it?
So here's a definition.
I guess I forgot to copy that over from the dictionary, but doesn't matter.
Also who does that these days?
It's like 2024. It's ridiculous.
So I didn’t… well actually I did… but I can assure you there's no slide in here that's been generated by AI.
Background
So maybe a quick little bit of background.
I'm a software engineer, it's probably not surprising here.
I care about these things.
And a few years ago I decided that I wanted to become a founder in the tools for thought space.
And I was thinking the usual approach, the usual tech approach, is that you look at the technology that's available, probably find something that has just become available.
It's very exciting and new.
And then you ask yourself, okay, so what can I do with it?
And I kind of wanted to do something a little bit different.
So I thought if I want to create some software that helps people think, maybe I should spend a little bit of time figuring out how thinking works.
And this is when I discovered a lot of cognitive science, which is basically an integration of all these fields.
And this helped me understand my obsession with human experience that I already had as a software engineer and somebody interested in software design.
I was always particularly interested in interaction design and in user experience.
And so studying cognitive science helped me expand my view from what I would now consider a pretty limited engineering perspective to a broader cognitive one.
And this also shifted my perspective on the topic of simplicity.
And what I want to do today is I want to share five perspective shifts that I went through that are all grounded in this idea that simplicity must be related to our sense making, how we make sense of our world and how we understand things.
Because we ultimately consider those things simple that we can understand easily.
Coexistence
Okay, let's start with coexistence.
My old perspective on simplicity was pretty straightforward: If you have a complex thing and you want to make it simple, you must get rid of the complexity, right? You remove it and that's worthwhile to do and it's valuable and it's important, so we should do that.
But that's not always true, is it?
I noticed that sometimes I really appreciate complexity. For instance, what makes a good movie?
For me, a good movie needs to have a good story. It needs to be interesting. If it's obvious, then it becomes predictable, and then why would I watch the movie? I want there to be challenges that the characters face and need to resolve. There should be some surprises, some twists, some unexpected things happening, and the characters should develop over time. They should change. So there's a lot of stuff going on. There's a lot of complexity there.
Of course, I don't want the opposite. I don't want it to be convoluted and overwhelming and lots of details thrown at me that I can't make sense of.
In really good movies, I think, they figure out a way to have enough detail that you can watch them again and you discover interesting things that you didn't see before, but they were not in the way when you watched it the first time.
So what if complexity is actually what makes things interesting?
It just needs to unfold in a comprehensible way so that we're not confused. I think this applies to stories, whether in movies or in novels we read. It applies to music. It applies to games, to art.
There is usually this process of disclosure that is going on as we are experiencing those things. And it helps us to become familiar with all the complexity that is there, but without being overwhelmed by it.
Okay, so of course, at first I thought this must be about balance then, right? It's a fine balance: we need to hit the sweet spot. We still have to reduce complexity, or keep it low, so that we can achieve more with less, and we limit ourselves so we don't create those convoluted messes that nobody can understand.
But then I wondered, if complexity unfolds in an intelligible way so that we can follow, is there a limit? Where is that limit?
Here's an example from video games: I'm a huge Zelda fan. I played Breath of the Wild during the pandemic which helped me stay sane. And now I play Tears of the Kingdom. These are very complex open world role-playing games if you're not familiar with them. I think they are brilliantly designed and the whole fun is to discover their complexity. There is a story that you can follow which is pretty straightforward but there is so much more that you can do in those games. And you can just explore the world and there is endless stuff to do. So they are really complex.
Can things be complex and simple at the same time?
What if simplicity is not the absence of complexity but the superior organization of it?
And if that's the case then more important than avoiding or reducing complexity might actually be how we organize it.
Another way to visualize this is that my thinking went from a very simplistic one-dimensional spectrum with simple on one side and complex on the other, where you just look for the sweet spot somewhere in between, to a more sophisticated two-dimensional model, where we now also have a dimension that measures how well organized the complexity is.
On the horizontal axis we have, basically, how easy it is to describe and perhaps to make the thing. I call this mechanical complexity. On the vertical axis we have how easy it is to understand it, to enjoy it, perhaps to use it. I call this experiential complexity.
This still gives us the classic categories where something is simple on both accounts, which is trivial, perhaps boring, perhaps useless. And we also have the opposite, where something is complex on both accounts. Where it's very intricate, confusing, and it might be very powerful, but it's very hard for us to figure out how to make use of that power.
But now we have these two other quadrants that are very interesting: We have one where something is easy to describe but hard to understand. Math, probably, for most people. But we also have this other one which is the opposite of that, which is like something that's very hard to describe or make, but very easy to understand. This is where I believe all those great movies, stories, art, and games live. And this is also where I believe good software with great user interfaces and user experience lives.
And the way those things manage to accomplish this is that they are hooking into stuff that we are intimately familiar with. This could be either direct experience, or you've encountered something before, so now you're familiar with it and so you kind of know and it becomes easier for you to understand. But it can also happen indirectly through something like metaphors or analogy.
And this is what gets interesting: We use this of course in computing with user interfaces where buttons look like buttons from the real world so we could transfer some knowledge over. Kinetic scrolling is maybe a more recent example that we do all the time on our touchscreen devices, which taps into our familiarity with physics because we live in a physical environment and this just feels very natural.
You don't really have to explain to people how to scroll but now you have to simulate physics on the other hand. So there is this thing with great user experiences where we often have to buy the experiential simplicity with mechanical complexity. We have to do a little bit more work on the making side to make it easier on the using side.
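As a toy illustration of that trade-off (a sketch I'm adding here for the written version, not any platform's actual implementation), kinetic scrolling on the making side boils down to simulating momentum and friction frame by frame, so that on the using side it simply feels like the physical world:

```python
# Minimal sketch of kinetic ("momentum") scrolling with friction.
# The constants are assumptions for illustration, not values from any real toolkit.

FRICTION = 0.95    # fraction of velocity retained each frame (decay factor)
MIN_SPEED = 0.5    # pixels per frame below which the scroll settles

def simulate_fling(position: float, velocity: float) -> list[float]:
    """Return the positions the content passes through after the finger lifts."""
    frames = [position]
    while abs(velocity) >= MIN_SPEED:
        position += velocity     # content keeps moving after release...
        velocity *= FRICTION     # ...but friction slows it down each frame
        frames.append(position)
    return frames

frames = simulate_fling(position=0.0, velocity=40.0)
print(f"{len(frames)} frames, settling near {frames[-1]:.0f}px")
```

The user never sees any of this; they just flick and the content glides to a stop, the way a pushed object would. That is mechanical complexity spent to buy experiential simplicity.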
There's tons of research on cognitive processing. I picked a source here that I'm pretty sure many of you are already familiar with, because it has become so popular. This is Daniel Kahneman's Thinking, Fast and Slow, where he talks about two kinds of cognitive processing.
There's System 1 and System 2. System 1 (here [on the slide] on the right) is intuitive. This is where we experience something that comes to mind instantly and it just makes sense. This is fast, this is automatic, we use it all the time. It can be emotional, it relies on stereotypes and most of the work happens unconsciously. We basically just get the result of this process surfaced in our conscious mind and we use intuitive processing to make sense of something intuitively. This is what I think is related to experiential complexity. Something is experientially simple, regardless of how difficult to describe or make it is, when we instantly get it.
System 2 is analytical, when you actively think through something or reason about something. It's slow, it's effortful, we use it only when we need it. It's based on logic, it's calculating, it's mechanical, almost algorithmic. And it happens very consciously because we have to be really concentrated, we have to be focused. We use our analytical processing to describe and define things and this is of course related to mechanical complexity. Something is mechanically simple if it's easy to describe or make.
The idea behind those two systems is that we have both of them because our intuition is super fast and we use it all the time, but it's sometimes wrong. And so we have this other system that can analyze the situation and then potentially correct our intuition.
This means that we can now truthfully state that something incredibly complex (mechanically) can at the same time be amazingly simple (experientially). This means simplicity and complexity can coexist in the same thing.
Relationality
Let's talk about relationality.
I've now pulled the observer into the model of complexity. That means I've just tied complexity to cognitive processing and therefore I also have tied it to subjective experience.
Now, obviously, we usually try to avoid subjectivity in favor of objective properties that can be verified independently of any observer's particular abilities. So that's a problem.
On the mechanical complexity side, where we have simple things that are easy to describe and reason about analytically, we can kind of get around this. We can find objective universal definitions for mechanical complexity, which is practically what we have done in the past when we were trying to define complexity, which is usually based around something like the number and kinds of components and the number of connections between them. So we're kind of okay here.
But then we have this other thing, where simple things can be coherent and intelligible so that they make sense to us intuitively and this is rather subjective. This is tied to the specific cognitive processing of a particular observer and this means that it's tied to how familiar this observer is with certain concepts that are needed to make sense of it. How do we deal with that?
Well, it turns out that in psychology and design this has been identified as a problem before. So I want to talk to you about affordances a little bit.
What's an affordance?
This could be an affordance. This is supposed to be a door handle… just making sure.
If you see this attached to a door in the real world, then it's obviously an actual, real, physical structure. There's an arrangement of atoms that you identify as a door handle, but that does not make it an affordance. Because the other thing that's needed for it to be an affordance is this: there needs to be a subject that perceives this particular arrangement of atoms in the environment. And this observer needs to have a subjective, phenomenal, mental experience of it, and a hand that can actually manipulate the door handle. It doesn't really make sense without that, and it's not really an affordance without that. It becomes an affordance through our cognitive processing and our sensorimotor capabilities to interact with it.
It's not purely objective, but it's also not purely subjective. It relies on the relationship between object and subject. And this is what I mean by relational.
James Gibson says that an affordance is equally a fact of the environment and a fact of behavior. He also writes about how this cuts across the dichotomy of objective and subjective. I would argue that simplicity is analogous to that. It's also relational because it's grounded in this relationship between object and observer.
Now let's look at this in a little bit more detail and let's look at the objective, mechanical complexity side first.
This is the mechanical complexity: We detect physical structure, and if we want to make something mechanically simpler, we have the option of modifying the organization of the structure. For instance, we could reduce the number of elements, and that hopefully makes it simpler. But the point here is that we can change the thing. And this means it applies universally, because if we change the thing, it doesn't matter which observer comes along: they will all have the same new experience of the changed thing.
On the other hand, the subjective part of experiential complexity means that we're projecting our own mental experience on this. And this means that we're projecting our familiarity on the thing.
If we want to make something experientially simpler, what can we do? Well, we have to become more familiar with the thing. We can learn about it. We can practice with it. We can grow our understanding of it. But we have to change ourselves. And this is the thing that does not really scale, because it's always particular: every observer has to go through this process by themselves.
I want to relate this more to our industry: If you think back to what you heard about the 1960s to 80s, just before the commercialization of personal computing, the first personal computing systems did not really make a distinction between users and programmers. If you were a user, you were the programmer, and vice versa. There wasn't really a distinction. These systems were designed without the thought of distinguishing between them. You were kind of expected to build your own tools. As soon as you switched on those computers, you were dropped into a very limited programming environment, and you were expected to make the thing that you wanted to use.
But then we figured out a clever trick: We of course knew before that we can use things without really understanding how they work. We rent houses, we drive cars, we use computers today, and we don't have to build them ourselves, if we don't want to. We can just buy them and have them built by somebody else.
So we had this really good idea of separating users from makers. And the good thing for the user is that they can ignore a huge chunk of mechanical complexity that they just don't need to be bothered by. They instead just have to become familiar with the interface. This is usually orders of magnitude easier, especially today, where all things are really, really complex. And then we can enjoy utility, convenience, automation — all these nice things as users.
On the other hand, makers can specialize on this mechanical complexity. They can become experts at implementation. This is where we can utilize our analytical abilities to the fullest, so we can figure out how to deal with all this mechanical complexity. And we can live happily ever after and have all this useful stuff. Most of us don't need to know how to build it. Most of us don't even need to know how it works. And we can all enjoy convenient utility at scale.
Feedback
Let's look at feedback.
We have doubled down on mechanical complexity. Look at all the progress that we have made! (That's just an arrow that's representing progress… don't read too much into it.)
At least we can look back at three centuries of progress maybe, if we start with the scientific revolution, and we had an industrial revolution, and we figured out computers, and today… now… we live in a world where we can't put away our devices and software is eating the world.
But we didn't just invent technology. We also have created powerful incentives where we have commercialization and capitalism which created this powerful flywheel where we can now manufacture and distribute pretty much anything pretty much anywhere. And now we live in this consumerism world where it is so much more convenient to just buy a new thing instead of repairing a broken one. That's what I mean by progress: We created so much value and aren't we so much better off today?
A lot of this progress rests on a single assumption. This assumption could be called “Descartes’ one weird trick”. He basically said, we can consider everything as a machine, because we can take it apart, we can analyze it, and then we can put it back together. This is of course looking at the mechanical complexity of things exclusively. It turns out that we have become really good at this.
We have become so good at this, I would argue, that this has now snowballed into a worldview, and perhaps the default worldview that we all have. It's so deeply embedded in our culture that this is practically what we mean when we talk about progress. It's that we can fabricate convenient utility at scale. And if you listen to the people that make a lot of money from this, they can tell you that there is yet more convenience and yet more automation to be had, because everything can be automated eventually, because everything is a machine — perhaps including us.
Here's what Christopher Alexander thinks about it. It's a long quote that I didn't want to cut from this presentation, but we don't have enough time so we will skip over this. If you're watching this as a video, pause it now! And if you're in the live audience, I'm sorry, reality sucks sometimes.
The gist though is that the mechanistic worldview relies on this idea that everything can be a machine. It's very goal- and results-oriented, because that's what machines are — they are built for a particular thing, for a particular outcome. We reframe everything as a problem that needs solving. And we're obsessed with productivity, with efficiency, with metrics, and with scale.
Notice how well this fits our analytical processing: We can reason and predict logically along causal chains. We create models of reality with the aim to control and manipulate our environment. The way we deal with surprises is that they are seen as failure, because they usually indicate that our models are not accurate and so we need to go back to the drawing board and figure out a better model. I also think that this worldview has given us quite some overconfidence in our ability to predict and plan things.
Now what could an alternative view look like that maybe fits our intuitive processing a little bit more? I call this the developmental worldview. (In the essay I'm actually talking about systemic, because this has been very much inspired by dynamical systems theory. But I think today developmental is perhaps a better way to understand this.)
The developmental approach is process-oriented because the goal might be unclear. We might not know what the goal is yet. The point is to figure out and make sense of the world. We're more interested in accuracy and fitness, in efficacy. And we follow hunches and we make mistakes along the way, but we do grow in a developmental sense. We grow our understanding and we become familiar with things. This of course requires feedback and this is why this section is about feedback.
We know that our understanding is incomplete and so we know that we need to discover and explore things. We also realize how limited we are in controlling things, so that we rather seek to influence and transform instead. And the way we think about surprises in this process is very positive, because surprises often cause important insights — and this is a clear indicator that we're actually learning something.
I'm obviously an industry guy, so I might not be an expert on this, but this looks like science to me. Because you're not trying to solve problems, you're trying to make sense. You're not publishing papers to appear productive, you don't care about metrics like citations. It's all about the quality of the theory. And when you apply for funding, then they know that you have to discover and explore things, so they don't ask you to reason and predict outcomes and schedules. Right? This is how science works, I hope.
From an industry perspective I have to say: Despite our best efforts, we also have figured out at some point that we can't really ignore the environment completely. And we came up with incremental and iterative approaches or agile methodologies. I think they nudge us in a good direction, but I don't think we have changed our mindset at all. I think we're still very much stuck in this mechanistic paradigm. It's just now that we have time-boxed sprints and story points as well.
I really believe that we cannot reach simplicity mechanistically. We have to rely on deep understanding. We need to explore the space. We can't really plan our way to simplicity. We need to learn, we need to grow, we need to become familiar with the details. And we have to have those surprising insights that may help us figure out simpler things.
Fitness & Adaptation
Okay, the last two I want to cover together, and I want to specifically look at the software industry here. I talked earlier about the distinction between user and maker, where the main benefit is that users don't have to deal with all that mechanical complexity.
In the software industry we have taken this even further: We can now scale our capacity to deal with mechanical complexity by selectively ignoring it.
We isolate components and modules. And then we can apply this divide and conquer approach to them. Which means that we now have other team members, other teams, or perhaps even other organizations just selectively focusing on one component and being able to ignore all the others. And then we also look for reusability, so that we can do something once and then we can reuse it in different contexts. All these things taken together enable this impressive scale that we have reached.
But I want to point out that all the complexity is still there. We're not removing any. I would argue that we are even adding a lot more to it, because we're just hiding it behind interfaces that get wrapped in perhaps reusable components. And then we end up in these complex dependency networks, which at this point, of course, transcend organizational boundaries, and we get caught up in business models that are trying to extract value from this dependence.
And the things that we depend on — all those components — are either completely opaque, and we just accept that. Or they are theoretically transparent. What I mean by that is: Yeah, it is great to have access to source code and to look at the implementation. But it's pretty worthless if you're not familiar with it. And becoming familiar with it is the hard part.
What I'm basically saying is that we have created this culture where we are cultivating ignorance. And it's okay that you don't know how most of your technology stack works. Don't worry about it, just keep shipping features.
And what can I say — we talked about this a lot today — it only gets even better. Generative AI is coming, or it's already here.
You don't know the framework? Or you don't know the programming language? LLMs have you covered. They can write the code for you. Perfect. So we can now understand even less, we can ignore even more, and we can yet be more productive.
How this will of course impact the quality and simplicity of the solutions that fall out of this process, I'm not sure about that. But I would bet that nothing is getting any simpler, everything is just getting more convenient. But let's not confuse convenience with simplicity.
Let's talk a little bit more about the mechanistic perspective. And of course we are interested from that perspective in modularity and scale, and the products and the software that we build we want that to scale, right?
This means it needs to be somewhat generic, universal, and context independent. It needs to work in general, and we want to address a large market with that. So it will never be quite perfectly adapted to us. And we have to accept that we will have to adapt to it. It is just good enough. It is not perfect.
This is what I would consider tools to be. We talk a lot about tools in software, but I think we use that term for a lot of things that I don't believe are tools. Tools for me are fundamentally mechanistic. This is a good thing! We want tools to be like a machine. We want them to be precise, reliable, and consistent. And we adapt to them happily by understanding what they are for, and then we can use them appropriately for whatever context we're in. We also want to have several tools available to us, so that we can choose the right one for the right job. And that they are somewhat static and closed and best for one use case is actually a good thing, because that's how we choose them for a category of tasks. Yes, you can use a hammer for a lot of things, but it just works best with a nail.
Tools that do all these things pretty well have the chance to become popular. This is the measure of success for this mechanistic perspective.
But of course there is the developmental one. And in this perspective we value things that adapt to us and become very specific and particular, perhaps even unique to us. Where we are not talking about just good enough but about exactly right.
Every realistic answer to a question usually begins with the words “It depends…” This is what context sensitivity is. This is situational awareness. This is about respecting the environment, and this is why I like to make this very clear distinction between tools and environments.
First of all, we use tools but we live in environments. A lot of software that we use today I would say we live in. They are not tools, they are environments. We should get this distinction right, because good software environments allow us to choose different paths to reach a goal, they are ambiguous in that way. And that's good, because everybody can choose their own path to do what they want to do. They offer surprises, they have functionality that maybe we haven't expected to be there. They are flexible, they are configurable, customizable, and they can become unique to us. They are open to extension. And great environments make us feel like we belong there.
We've become so familiar with them, and in a way they have become familiar with us, because we have made them adapt to our unique requirements. So the measure of success for those environments is not popularity, but belonging. Maybe keep that in mind next time you find yourself in a debate about vim or Emacs.
Tools absolutely have a place in environments. We buy power tools and we arrange them in our workshop. I don't think we need more tools necessarily. We need the right tools. And the right tools can then become part of this environment that we're in, that we want to live in. And that's an environment that fits our needs, that adapts to our requirements. And it's one that we understand, that we can become familiar with, that we want to become familiar with, that is worth becoming familiar with, and which is perfectly adapted to us.
I'm worried that we are losing our balance here, because we are missing the forest for the trees. We're kind of missing the environments for all the tools that we constantly talk about.
We're also ignoring a lot of the classic papers that have already tried to make us aware of how important understanding our environment is.
Here's a quick run through: “Programming is about insight, about forming a theory.” From Programming as Theory Building.
“Simplicity is the most important consideration.” From Worse is Better.
“Complex systems reflect our immature understanding.” From A Big Ball of Mud.
“The incomprehensible should cause suspicion not admiration.” From A Plea for Lean Software.
“Understanding a system is a prerequisite for avoiding complexity problems.” From Out of the Tar Pit.
And there's lots more, but as you can see, I ran out of space.
All of these point towards the importance of developmental growth, that we need to take time to explore. We need to take time to become familiar with things. We need to look under the hood, beyond the interface, and we need to understand how things work. It's ultimately essential for discovering simpler solutions.
I'm worried that we have convinced ourselves that we found a shortcut by cultivating ignorance. I don't believe that there is a shortcut.
This is how I think about simplicity today and there's so much more to explore and discuss. I came across so many fascinating ideas in cognitive science that I believe can help us with this more systemic, developmental understanding of ultimately unpredictable but adaptive dynamical systems. Because cognitive science is trying to understand the most unpredictable dynamical system of all — us.
Of course I had to limit myself today so that I don't overwhelm you with complexity. I hope I managed to unfold a comprehensible story that you were able to follow. You'll find a little more depth in the essay, and I also have a Substack where I write about this.
But there's one question that of course remains: What can we do?
To try to give an answer to that question I want to come back to the very first perspective shift that I talked about, about coexistence — that simplicity and complexity can coexist. I believe that everything has to start here.
Today I've been communicating to you in literally black and white terms. I wanted to polarize to clarify, and I wanted to clearly differentiate the options that are available to us.
But not once in this presentation was anything on the black side bad! If you missed that detail I encourage you to watch it again.
This is not about choice. This is not mutually exclusive. To become good at one of those things doesn't mean that we can’t also become good at the other. However, at the moment of course I believe that we are orders of magnitude better at one of them.
They don't compete, they complement, they need each other. I think we need both, and we need to integrate them.
Alan Kay yesterday in the interview was talking about a sign at Bell Labs. I believe it said something along the lines of: “You either do something useful or you do something beautiful.”
Why not try both?
There is beauty in simplicity I believe.
I really believe that we have this capacity in us. We're using it all the time. It's part of our humanity. It's just so intuitive that we are not consciously aware of it. So let's not lose faith that we have something to offer that machines don’t.
I'll leave you with this, which I've taken from one of my posts on Substack. If anything I said today resonated with you, then please check out the Substack and please reach out, either through Substack or email. I'll be on the Discord, and of course I'm happy to take some questions now. Thank you.
Mirror of the Self is a series of essays investigating the connection between creators and their creations, trying to understand the process of crafting beautiful objects, products, and art.
Using recent works of cognitive scientist John Vervaeke and design theorist Christopher Alexander, we embark on a journey to find out what enables us to create meaningful things that inspire awe and wonder in the people that know, use, and love them.